Thursday, September 12, 2013

Join Calyptix at the ASCII Success Summit in Austin, TX!

We're excited to be at the 2013 ASCII Success Summit on September 18th and 19th in Austin, Texas, and we welcome you to join us!

As an IT service provider, we know that you value partnerships with companies you can trust. Come visit our booth and learn how Calyptix builds true service partnerships: we provide not only an excellent managed security solution and top-quality service for your SMB clients, but also a business relationship that matches your business model.

We build so much more than just a firewall, and we're eager to show you what we can offer.

See you in Austin!

Wednesday, July 10, 2013

Risk Alert: Default passwords may give attackers a backdoor

US-CERT recently released an advisory outlining the danger of exposing remote access to the Internet with default passwords in place. The advisory points to the prevalence of the risk, especially in embedded devices such as routers and firewalls.

Just recently, a backdoor account with a default password was found in one of HP's storage products that is used by a wide range of organizations, including SMBs, large enterprises, and cloud service providers. In January, a similar backdoor account was found in Barracuda's appliances.

(UPDATE: This Slashdot article published Friday indicates the same common password was used across even more HP devices.)

This problem is certainly not new. In 2010, a team at Columbia University scanned large portions of the Internet and found more than 540,000 publicly accessible embedded devices with factory default root passwords.

These default accounts and passwords for remote access make it trivial for an attacker to compromise the networks of multiple organizations, whether to send spam, steal intellectual property, or obtain valuable financial data.

Where does Calyptix stand? Since the very beginning, every AccessEnforcer unit has been manufactured with a unique admin password specific to that device. This means practically no two AccessEnforcer units will ever have the same password. Even if an attacker obtains the password of one AccessEnforcer, that password cannot be used to gain access to another. To ensure our ability to provide timely and effective support, each AccessEnforcer has a unique account for remote vendor support, which can be easily disabled by the customer.

For the unified firewalls made by Calyptix, we follow the principles below, which we believe provide the best balance of security and support. We strongly urge you to confirm that your other security vendors follow similar guidelines:

1. Tell the customer about any remote access.

2. Give your customer the ability to turn off or restrict remote access.

3. Ensure credentials are unique on each device.

4. Implement strong password practices (for example key pairs or long non-dictionary passwords).
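
As a rough illustration of items 3 and 4, unique and strong credentials can be generated per device at provisioning time. The commands below are a generic sketch, not Calyptix's manufacturing process, and the unit number is a made-up placeholder:

    # Generate a long random admin password for one specific unit
    openssl rand -base64 24
    # Or provision a per-unit key pair for remote support access instead of a shared password
    ssh-keygen -t rsa -b 4096 -f support_key_unit12345 -C "remote support, unit 12345"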

Password security is a vital part of protecting your business and customers. Have questions or comments? You can reach us at (800) 650-8930 and info@calyptix.com.

Wednesday, August 1, 2012

PPTP is so insecure, it should be considered unencrypted

Security researchers Moxie Marlinspike and David Hulton have presented findings showing the MS-CHAPv2 authentication protocol can be broken with a 100% success rate, and have publicly released the tools for anyone to do so. This protocol is used in WPA2 Enterprise encryption, as well as almost all PPTP VPN implementations. If you're still using a PPTP VPN, be aware that anyone sniffing your traffic can crack it and gain access to your network. The researchers say that PPTP traffic should essentially be considered unencrypted.
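
If you're not sure whether PPTP is still in use somewhere on a network you manage, a quick scan for the PPTP control port will turn up endpoints to migrate. This is a generic sketch; nmap and the documentation address range below are assumptions, not part of the researchers' release:

    # Look for hosts listening on TCP 1723, the PPTP control port
    nmap -p 1723 --open 192.0.2.0/24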

The AccessEnforcer has never implemented PPTP; instead, it gives the administrator the ability to easily deploy CalyptixVPN clients (based on OpenVPN) and to create secure IPsec tunnels. Both protocols are recommended by the researchers as alternatives.

Tuesday, May 15, 2012

Why You May Actually Need a Firewall

Today, a client sent us this article: Why you don't need a firewall. Now of course, we sell an appliance that includes a firewall, so I admit bias in this matter. On the other hand, my sysadmin side loves the idea of fewer devices to manage. Sadly, the editorial's justifications lack logic and reason.

The author, Roger A. Grimes, first argues that software is less hackable today than it was in the past, thereby reducing the need for a firewall. This couldn't be further from the truth. Every single security bulletin published by Microsoft in the last year contains one or more remote code execution vulnerabilities. As long as new code is written, new bugs will inadvertently be created. I doubt Mr. Grimes is a proponent of leaving servers unpatched, but if that were his argument, a firewall that blocks outside access to a server's vulnerable services could certainly prevent them from being exploited. Even if you do aggressively patch your servers, each month's security bulletin informs you of the exploits your servers and workstations were vulnerable to up until the moment they were patched.

Mr. Grimes' next argument is that 'firewalls tend to be horribly managed,' as 'almost no one reads the logs or responds to the events recorded.' This is clearly an administrator problem. Although some devices are easier to use than others, I know of at least one vendor that works hard to make the task of remediating network alerts as simple as possible.

But he continues: "I find so many firewalls with 'ANY ANY' rules that defang the protection, it doesn't faze me anymore." This is indeed a useless rule set. The AccessEnforcer blocks all inbound traffic by default and, with three clicks, blocks all outbound traffic as well. This has been our recommended practice since the company's inception, and our customers know we have articles on our Online Portal that explain in simple terms how to lock down a network with a 'default deny' rule without hurting users or blocking critical network traffic.
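
For readers unfamiliar with the approach, here is a minimal sketch of a 'default deny' policy in OpenBSD pf syntax. It is illustrative only, the addresses are placeholders, and it is not the AccessEnforcer's actual rule set:

    # Drop everything by default, then allow only what is needed
    block all
    # Let internal users out for DNS and web browsing
    pass out proto { tcp udp } from any to any port { 53 80 443 } keep state
    # Publish a single HTTPS server to the outside (placeholder address)
    pass in on egress proto tcp from any to 192.0.2.10 port 443 keep state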

His next argument, that ports 80 and 443 are left too open, is really just a continuation of the last one. Any decent firewall will help you lock down these ports, and more. With a web filter that proxies all web traffic, an intrusion prevention engine that inspects and blocks it, and a content filter that blocks known malware sites as well as certain types of downloads, a UTM device can reduce the attack surface significantly. Of course, no system is perfect, which is why security experts recommend multiple layers of security, along with continuous signature and definition updates.

Mr. Grimes' own examples even work against him. "The recent Remote Desktop Protocol exploit is a case in point: Microsoft recommended that affected clients block RDP port 3389 at perimeter firewalls as one of their protective work-arounds. But everyone I know, instead, installed the emergency patch," he writes. However, the flaw was first discovered in May 2011. If administrators had already been blocking port 3389 at the perimeter, their servers would never have been publicly exposed to it in the first place.

I once worked for a university IT department that assigned public IP addresses via DHCP to every internal computer. When deploying new machines, we would first bring them online on a dedicated, firewalled network so they could receive their patches and security policies. Once, we accidentally connected a batch of new laptops to our WiFi network. Shortly thereafter, we received a call from the Security Office saying we had a bunch of IPs on our network spewing spam and malware. The total time that elapsed between putting those laptops on an un-firewalled network and receiving that call: five minutes.

Wednesday, April 11, 2012

Running OpenBSD in a VirtualBox VM full-screen at 1920x1080 Resolution

VirtualBox and OpenBSD have progressed quite a bit since our last post on the subject. With current versions of the software, which at the time of this writing are OpenBSD 5.0 and VirtualBox 4.1.x, no special tweaking is required to install the OS, boot the VM, or even start the X Window System. Running X full-screen on a 1920x1080 monitor, however, still takes a few extra steps:
  1. Increase the VM's Video Memory allocation to at least 32MB in its settings.
  2. Set the new VESA mode in VirtualBox. The syntax is:
        VBoxManage setextradata [VM-name] CustomVideoMode1 [WidthxHeightxBitDepth]
    Depending on your OS, you may need to type the full path to the VBoxManage command. On my computer, the exact command I typed was:
        /Applications/VirtualBox.app/Contents/MacOS/VBoxManage setextradata taco
        CustomVideoMode1 1920x1080x16
    This command provides no confirmation output, but you can verify it by running:
        VBoxManage getextradata [VM-name] CustomVideoMode1
    which gave me:
        Value: 1920x1080x16
  3. Edit your xorg.conf file to add the following lines under the "Monitor" section:
        HorizSync     31-80
        VertRefresh   30-100
  4. Set the DefaultDepth to "16" under the "Screen" section by adding the last line shown here:
        Section "Screen"
            Identifier   "Screen0"
            Device       "Card0"
            Monitor      "Monitor0"
            DefaultDepth 16
  5. Towards the bottom of the "Screen" section you'll find the definition for the 16-bit mode. Define your resolution here:
        SubSection "Display"
            Viewport     0 0
            Depth        16
            Modes        "1920x1080"
        EndSubSection
That's it. Run startx and you'll get a large window with scrollbars; press your Host+F key and you'll have a full-screen desktop with no scrolling.
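
If you want to double-check the resolution from inside the guest, you can query the X display dimensions (this assumes the standard X sets, which include xdpyinfo, are installed):

    xdpyinfo | grep dimensions

The output should include a 1920x1080 entry.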

Unfortunately, I wasn't able to get 24-bit working at 1920x1080 in VirtualBox. 24-bit does work for lower resolutions (up to 1280x1024 on my machine), which makes me think this is a VirtualBox limitation.

Tuesday, April 3, 2012

Getting Started with Cyphertite Remote Backup

Cyphertite is a new remote backup tool from Conformal Systems that, in many ways, is similar to other popular services such as Backblaze, Mozy, Carbonite, and JungleDisk. Linux and FreeBSD users will find similarities with tarsnap. It's a bit of a crowded market, no? However, four attributes set Cyphertite apart:
  1. Storage that's cheaper than AWS at $0.10/GB (no charge for bandwidth)
  2. The amazing combination of compression, deduplication, and client-side encryption
  3. Strong cryptography (White Paper - PDF)
  4. An open source client that anyone can review
Since Cyphertite may not yet be well known in the general IT industry, I thought it would be helpful to explain exactly what it is and how it works. This post actually grew out of the notes I kept for myself as I began setting it up and using it internally. Since the Windows and Mac OS X clients aren't ready yet, this how-to guide assumes a Unix or Linux server (I used OpenBSD). Even if you only need to back up Windows files, you can install Services for NFS on Windows Server, set up OpenBSD on an old spare box or VM, and mount the NFS share on it.
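
As a rough example of that last setup (the hostname, export path, and mount point below are hypothetical), the OpenBSD box would simply mount the Windows export and back it up like any local directory:

    # On the OpenBSD machine that will run cyphertite
    mkdir -p /mnt/winfiles
    mount -t nfs winserver:/export/files /mnt/winfiles
    # /mnt/winfiles can now be backed up with the ct commands covered below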

Also, please note that cyphertite has a wiki, live chat and email support from Conformal, a forum, and man pages. Although it took me some time to grok everything, I had no problem getting help thanks to these resources. The developers were even kind enough to review this guide for accuracy. If you're not familiar with Conformal, you may also want to check out some of their other initiatives.

How it Works

Cyphertite behaves similarly to tar in that it archives files and uses similar flags. However, instead of operating on tarballs, you work with "ctfiles," which are not actually file archives but rather representations of the files that have been encrypted and archived remotely. During the backup process, cyphertite:
  1. Breaks up every file into 256KB chunks.
  2. Records the chunk's SHA-1 hash in a local sqlite database.
  3. Compresses the chunk.
  4. Encrypts the chunk using AES-XTS.
  5. Records the encrypted chunk's SHA-1 hash in the localdb.
  6. Records metadata in a ctfile about the file being archived, including its path, permissions, and list of hashes that belong to the chunks that make it up.
  7. Archives the chunks at the remote backup destination. 
If a chunk's hash pair matches a previously uploaded chunk, the chunk is not uploaded again, which is what gives cyphertite "chunk-level" deduplication. Another benefit of the hashing is file integrity verification.

While the database is a collection of hashes of the chunks of your files that have been backed up, the ctfiles are what recreate the files from chunks when you perform a restore. The ctfiles end in .ct and are generally stored remotely. However, since they can grow to a significant size, they are cached locally as well, in what is referred to as the "metadata cache directory."

It is possible to disable 'remote mode', in which case the ctfiles aren't stored on the remote server. This is called 'local mode'. In local mode the ctfiles are not backed up, so if you lose them you will not be able to recover your data, and you also lose the ability to cull (delete) old data. Since the ctfiles are encrypted with the same algorithm used to encrypt the chunks, Conformal doesn't have access to this metadata, so there is virtually no downside to remote mode.

When performing operations on a ctfile, you pass only the name of the ctfile, not its full path. Cyphertite checks its .conf file for the path to the cache directory, or else uses the remote ctfile.
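
To get a feel for the chunk-and-hash idea, you can split a file into 256KB pieces and hash each piece yourself. This is purely conceptual, not what cyphertite runs internally, and the file name is just a placeholder:

    # Split a file into 256KB chunks (chunk.aa, chunk.ab, ...) and hash each one
    split -b 262144 largefile.bin chunk.
    sha1 chunk.*        # OpenBSD's sha1; use sha1sum on Linux
    # Identical chunks produce identical hashes, so they only need to be stored once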

Getting Started  

First, download and install cyphertite, and complete initial setup. Now, let's run a full backup of our /home directory:
    ct -cvRf allhomefiles_1.ct /home
The ctfile will be generated and stored in the cache directory. Its name is prefixed with a timestamp, called a 'tag', so it will look like 20120330-143853-allhomefiles_1.ct. If you have the following line in your cyphertite.conf file:
    ctfile_remote_auto_differential = 1
then running the exact same command again will create a new incremental ctfile with the same base file name, but with a new tag and a vastly smaller file size. I recommend this for typical usage, as you will save yourself a lot of processing time by only calculating hashes and writing metadata for new or changed files. The new ctfile will thus reference only these new or changed files, and only chunks from them will be uploaded. Running the following command instead will force a new full or "Level 0" backup, which creates a new full ctfile:
    ct -0vRf allhomefiles_1.ct /home
However, since you are still referencing the original ctfile, cyphertite will pull metadata from it and all subsequent incremental ctfiles, and will only archive new chunks. The new ctfile itself will contain all the metadata, not an incremental amount.
Think of this as a full backup where cyphertite is smart enough to upload only what it needs. Remember that remotely, data is stored only as encrypted chunks, not in any sort of file or path structure. The new level 0 ctfile will reference all the chunks it needs to recreate every file you wanted to back up. So when the original level 0 ctfile and subsequent incremental ctfiles are deleted, and the data is culled, everything associated with this new level 0 ctfile will remain intact.
Let's list all remote ctfiles (the -m flag stands for 'remote'):
    ct -mt
You can manually delete a ctfile (I'll show how this can be automated shortly). To remove a remote ctfile, check the output from the above command for the exact filename, and type:
    ct -mef 20120330-143853-allhomefiles_1.ct
Note that the -e flag stands for 'erase'. You can then delete the locally cached ctfile as well. To remove all the chunks of data associated with the deleted ctfiles that are not referenced by any other ctfiles, run:
    ctctl cull
This will also cull anything that doesn't meet the parameters stored in the following options in cyphertite.conf:
    ctfile_max_differentials = 29
    ctfile_cull_keep_days = 30
The first one, despite its name, will run 29 incremental backups, then force a level 0, then run 29 more incremental backups, and so on. Assuming a daily job, it will run a full backup once a month with incrementals in between. The second line will auto-delete old ctfiles after the specified time period, as well as their associated data (if unreferenced by other ctfiles). Our example above tells cyphertite to keep only 30 days' worth of ctfiles and data.
With the above two options set, you can effectively run cyphertite from a single, repeated command, and it will create new level 0 ctfiles and cull your old data.
In general, you should schedule 'ctctl cull' to run about as often as you run level 0 backups. Culling is an 'expensive' operation, as it must identify the hashes of all the blocks to save. Since most data sets barely change over time, it usually makes more sense to cull weekly or monthly than daily.
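
If you prefer, a tiny wrapper script along these lines keeps the invocation and its logging in one place (the log path is just an example):

    #!/bin/sh
    # Nightly backup of /home; with the config options above, cyphertite decides
    # when a run is a differential and when a new level 0 is forced
    ct -cvRf allhomefiles_1.ct /home >> /var/log/cyphertite-backup.log 2>&1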

Schedule It

Typical usage would be to run a regular incremental backup daily, with a level 0 backup monthly (driven by the ctfile_max_differentials option as noted above), and to cull after the regular backup process has finished. Your crontab would look like this:
    # min hour DoM month DoW command
    0 0 * * * ct -cvRf allhomefiles_1.ct /home # Daily incremental
    0 12 1 * * ctctl cull # Monthly cull of old ctfiles
    0 12 2 * * ctctl cull # Monthly cull of old data
Make sure you run the backup command manually the first couple of times so you know how long the operation takes to complete, and can schedule the jobs so they don't overlap. This is critical, as both cyphertite and cull need access to the database.
With the above schedule, as long as you have properly set ctfile_max_differentials and ctfile_cull_keep_days, you will have regular full and incremental backups and will automatically delete old data.

Restoring Data

To restore the entire contents of our /home folder from the most recent backup to the root directory:
    ct -C / -xf allhomefiles_1.ct
One thing to note is that cyphertite mimics tar in its trailing-slash (un)awareness, unlike rsync. When archiving a directory, it doesn't matter whether you back up /home or /home/; either way, the parent directory is included, so keep this in mind when restoring. That is why I restore the /home backup to the root directory, so I don't end up with /home/home/.
Note that the above command will restore the latest version of all the files that have ever been archived using that ctfile, including the original level 0 and all subsequent incrementals, even if those files were later deleted. If you want to restore only the files referenced in the most recent backup, and not the ones that had been deleted prior to it, ensure you have this option set in your cyphertite.conf:
    ctfile_differential_allfiles = 1
To restore a specific file (or set of files) from a particular date to a directory named 'recovered', we can operate on a tagged ctfile and use a regular expression or glob:
    ct -C recovered -xf 20120330-162629-allhomefiles_1.ct *important_file.xlsx
The leading * is necessary because glob matching is done against the full path; without it, the glob wouldn't match.
If you don't remember the exact name of the file you want to recover, you can use the cyphertitefb command to enter a shell that will allow you to navigate through a virtual filesystem as stored in a ctfile, and restore just the files you need, using standard unix commands:
$ cyphertitefb allhomefiles_1.ct
ct_fb> cd /home/horace
ct_fb> ls
ct_fb> get important_file-new-version3.23-saved-final.xlsx
ct_fb> quit

The important file will have been copied to your local cyphertite server, where you can then present it back to the user. 
I hope you have found this introduction to this amazing tool useful.

Wednesday, March 7, 2012

Update regarding delayed change to 'DNS Changer' servers

Yesterday, I wrote about actions we will be taking to protect our customers and their clients from the DNS Changer IPs. It turns out that at the eleventh hour, a judge extended the deadline to July 9, 2012. This means that everyone now has additional time to clean up any infected machines.

It also means that our temporary blacklisting of those IPs will probably not still be in place by the new deadline, since it does not persist across unit reboots. So, we will once again blacklist those IPs on July 9, and I will send out another email reminder at that point as well.

In the meantime, you can easily test whether a machine has become infected by visiting dns-ok.us. If it is infected, you'll see red:

[Screenshot: dns-ok.us showing the red "infected" result]

This site is actually operated by the ISC and is rather clever in its implementation. All of the "bad IP" DNS servers resolve dns-ok.us to 38.68.193.97, while all regular DNS servers will resolve it to 38.68.193.96. Hopefully, your machine uses good DNS servers, and you'll see this instead:

[Screenshot: dns-ok.us showing the clean result]

You may even want to email this to your clients and ask them to run it on their machines, to help prevent a mad scramble when the DNS servers are shut down.
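
You can also run the same check from a command line, without a browser, by querying the machine's configured resolver directly (this assumes dig is available; nslookup works similarly on Windows):

    dig +short dns-ok.us
    # 38.68.193.96 means the machine is using legitimate DNS servers
    # 38.68.193.97 means its resolver is one of the rogue 'DNS Changer' servers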

We'll provide more information about the IP blacklist in early July.