Episode 128 – Bash is so Shocking!!!

1) Introduction

  • Brian – Fighting with updates breaking my Wi-Fi but nothing else
  • Walt – Compiling on ARM Architecture

2) What are we up to with Linux?

3) Conclusion

Check out our other work:

Ohio LinuxFest, October 24-25

The Experimental Nature of Linux

So lately it is off to the races with new releases, with Ubuntu, Fedora, and Red Hat, just to name a few, having recently put forth their latest offerings.  These will always be followed by the derivative distributions like Mint and CentOS.  So what is so great about all this new stuff?  Well, everything of course.  Don’t you want to be on the latest and greatest version of the kernel?  Don’t you want access to new file systems like Btrfs?

The open source community is great at coming up with new and exciting tools all of the time.  There are no strict rules for the process of creating all of these new features, tools, or products.  Most projects, though, follow a process of having at least a “stable” and a “development” release of their code.  The developers, documenters, and users work together to set the standards that must be reached before a new feature, tool, or product can move from the development release to the stable release.  To get more real-world testers for the new components, some projects migrate what could be called a late-stage beta program into the stable release.  These additions will normally be marked with some type of experimental tag.  On production servers, you normally want to stay away from anything with this tag unless it is to correct an issue that you are currently dealing with.  At times, though, you will need that one new feature or, in some cases, a driver to make something work.  Do you use the experimental code?  For the most part our answer is yes, as long as you do a lot of testing.  Here is a situation I found myself in early in my career:

“I was responsible for managing the sendmail gateway servers going to the Internet for my company, and we were sending out about 1GB of e-mail a day.  Not bad, except that we would do mass e-mails to our shipping customers with status updates in the middle of the night.  The storage needed to handle these bursts was growing rapidly.  At the time, SCSI RAID cards were costing us over $5,000 each.  We had six 1GB drives in the server, but needed them in a RAID array.  The corporate budget was tight, and we could not convince management that a hardware RAID solution was a high enough priority to get into the budget.  At the time, software RAID solutions were just starting to be supported in the Linux kernel.  The drivers were tagged “experimental”.  After hours of debate amongst the Solaris/Linux admins, we decided to put this new software RAID onto a test system and pump some messages through it.  Everything worked great on the test system.  We upgraded production with the new option and we all crossed our fingers.  Everyone slept very little that night, and we were surprised that it had just worked.  A few months later we had a hardware failure on one of the drives.  We marked the disk for replacement, swapped in a new drive, and rebuilt the array, all while the system was processing messages, without any downtime.  It was just like a hardware RAID card.”
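For anyone curious what that drive swap looks like today, here is a rough sketch using the modern mdadm tool (array and device names here are hypothetical; the original story predates mdadm, so this is an equivalent, not the commands we actually ran):

```shell
# Mark the failed member, remove it, and add the replacement disk.
# /dev/md0 is the array; /dev/sdc1 is the failed member and
# /dev/sdd1 is the new disk -- all hypothetical names.
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --manage /dev/md0 --add /dev/sdd1

# Watch the rebuild progress while the array stays online.
cat /proc/mdstat
```

The array keeps serving reads and writes during the rebuild, which is exactly the behavior the story describes.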

Since the experimental tag was there, we did a lot more research before we tried to deploy it.  We actually e-mailed the lead developer for the project, explained what we were thinking, and asked him if he thought it was ready to go.  He informed us that he had been running the latest version for three months and, other than the day his cat danced on his keyboard, he had experienced no issues.  Try to do that with any hardware or software vendor out there today.  We received a personalized response from the developer who knows the code best, which was impressive.  It is nothing we would ever see from the likes of Microsoft or IBM.  The big companies keep their developers as far away from customers as possible, for several reasons, not the least of which being that they are all generally grumpy and occasionally frustrated with the way a piece of code is behaving.  (OK, I am not saying that admins like myself are any better, just that customer service people are the ones supposed to talk to the greater public.)  So we went for it and it worked.  That was pretty cool.  But I am not saying, “Let’s all go out and try some experimental software in a production environment.”  I am just saying that sometimes, just like Gmail and its beta tag, the experimental tags get left on a little longer than maybe they need to.  When you have a situation that calls for it, try it.  Remember that Linux and the other open source operating systems were, and sometimes still are, experiments.  At the same time, both Windows and Mac OS X have their unstable, experimental parts as well.

You should always use caution when using experimental packages, drivers, and tools.  Whenever you have a need, though, you should try the work out on a test server first.  After your work is completed, remember to give feedback to the project and tell them why you did, or did not, choose to use their software.  Most people underestimate the power of even a small paragraph of positive encouragement or constructive criticism written to the developer or developers on a project.  Remember that your feedback is part of the reason people create open source software in the first place.

Updates, experimental or otherwise, to any software can be stunning if you get what’s promised.  If you get hit by a bug, though, it can make your life miserable.  If you are testing this type of software on a machine you are just playing around with, it may not be a big problem.  However, if you are hit with a bug on a critical server when it is in production at your job, it may lead to an uncomfortable conversation with your boss.  If that happens often enough, you could even lose your job, so remember to practice the rules of being a safe admin.

Linux Security: A CTO’s Guide

As a CTO, manager, or technical lead, what questions should you be asking when it comes to securing a Linux server?  Are Linux servers really as secure as everyone says?  What should be the focus of your team when securing servers for your company?  In this CTO Brief, we are going to try to answer some of these questions and possibly a little more.

Are Linux machines really more secure than other servers?

The answer to that question is that it depends.  No computer can ever be made completely secure.  Sometimes, no matter how secure you make a computer, an inexperienced employee may hand over information without even realizing what they are doing.  But that will be a discussion for another Brief, so let us get back to the topic at hand.  Out of the box, Windows machines used to be far more insecure than they are today, but no matter what, there has always been a need for them to be secured.  The issue was that you had to be a Windows guru to get it done and still have a stable machine.  Microsoft has worked hard to change this, but Linux started out with a sizable lead.  One of the things that Linux inherited from other forms of Unix was a powerful and mature security model.  If you ignore the basics of security, with things like weak passwords and wide-open insecure services, a Linux machine can easily become a very insecure machine.  It is easy to avoid most of the pitfalls if your team takes its time and thinks through the issues you have to solve.

What questions should you be asking when it comes to securing a Linux server?

What is each of your Linux servers going to be used for?  That should be your first question.  Until you know and define what a particular machine is going to do, you cannot determine what needs to be running and which risks are justifiable.  There are no hard and fast rules as to which services should go together with other services.  Each environment and situation will have its own unique challenges.  For the rest of the examples here, we will focus on a web server on the Internet that also serves as a backup DNS server.  The only service ports on this mythical machine that should be visible to the Internet are ports 80 and 443 for the web, port 53 for the DNS service, and port 22 so that you can log in and manage the server.  All other services or daemons that expose or listen on an IP address reachable by people not on the machine should be shut off, or configured to listen only on the server’s local loopback address or an internal address.
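As a sketch, a firewall for this mythical machine could look something like the iptables rules below.  This is a minimal, hypothetical example rather than a complete ruleset; adapt it to your own environment before using anything like it.

```shell
# Default-deny inbound policy, then allow only what the server's
# role requires: web (80/443), DNS (53), and SSH management (22).
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22  -j ACCEPT   # SSH management
iptables -A INPUT -p tcp --dport 80  -j ACCEPT   # HTTP
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # HTTPS
iptables -A INPUT -p udp --dport 53  -j ACCEPT   # DNS queries
iptables -A INPUT -p tcp --dport 53  -j ACCEPT   # DNS zone transfers
```

Anything not explicitly allowed is dropped, which matches the "shut off everything the role doesn't need" approach described above.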

What if my team uses a web-based administrative tool, or any other type of remote administration tool, that we cannot live without?  What should we do?

If your team is using a web-based solution to manage servers more effectively, the web server that service is running on should never be visible from the outside world.  How can it still be used then?  With SSH enabled, it is easy to map a local port on the admin’s workstation to a remote port where the web-based administration software is running.  Once that mapping is established, the admin just points a web browser to the port on their workstation, and all requests are forwarded over a secured SSH connection.  For this to be effective, though, the management software must only listen on localhost, which is the machine’s internal loopback address.
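The port mapping itself is a single SSH command.  The hostname, user, and port below are hypothetical (10000 is a common default for web admin tools):

```shell
# Forward local port 10000 to port 10000 on the server's loopback
# interface; all traffic travels inside the encrypted SSH session.
ssh -L 10000:localhost:10000 admin@web1.example.com
```

With the tunnel up, the admin browses to https://localhost:10000/ on their own workstation, and the admin tool never needs to be exposed to the Internet.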

How do we audit what we did so far and make sure it doesn’t change?

Depending on the level of security you need, there is a wide range of both closed and open source solutions.  Before spending any money on a closed source tool, I strongly recommend investigating the open source tools available.  We have found tools like Puppet and Nessus, both open source, to be some of the best in the industry, paid or unpaid.  Both projects offer contract support and enterprise feature sets.  Puppet gives you the ability to check and update a system’s local configuration.  Nessus, on the other hand, makes sure that only the services, ports, and applications you want are accessible from outside of the machine.  This two-pronged approach, along with service monitoring from a tool like GWOS or Zenoss and a rigorous software update process and schedule, will give you a terrific base to start from.
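For the "make sure it doesn’t change" half of that question, a quick way to see whether a node has drifted from its Puppet-managed configuration is a dry run.  This assumes a standard Puppet agent setup:

```shell
# Run the Puppet agent once in no-op mode: it reports what WOULD
# change to bring the node back in line, without changing anything.
puppet agent --test --noop
```

Scheduling the agent normally (without --noop) then turns the audit into automatic enforcement.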

So what about anti-virus software for Linux?  I heard you don’t need it.  Is that true?

Linux, or any Unix for that matter, is not immune from getting viruses.  There are just considerably fewer viruses that can exploit Linux and the services that run on it.  Part of what makes Linux so hard to write a virus for is that not every person using the machine can do damage; on most systems, only the root user can run the most damaging commands.  If the machine is serving as a file server for, say, Windows machines on the network, Linux can still pass an infected file around your network if you aren’t using anti-virus software.  ClamAV is the most widely known anti-virus tool for Linux.  The tool is open source and lets you protect both Linux and Windows files from viruses.  Other options do exist, and you should do some level of comparison and decide how your company should move forward.
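A basic ClamAV run for the file-server scenario above might look like this (the share and quarantine paths are hypothetical):

```shell
# Update the virus signature database first.
freshclam

# Recursively scan the shared directory, print only infected files,
# and move anything found into a quarantine directory.
clamscan --recursive --infected --move=/var/quarantine /srv/share
```

In practice you would schedule this from cron, or use the clamd daemon for on-access scanning, rather than running it by hand.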

So what’s next?

The next step after this is to keep closing any holes identified by Nessus.  Then start moving down through the OS and locking down file and directory permissions.  If you still need more security, start using tools like AppArmor, which place a shield around your exposed applications to protect the operating system.  From here, the goal of what you are attempting to do will lead you to the next set of tools.
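As a taste of the AppArmor side of that, here is a minimal sketch; the profile name is an assumption and varies by distribution:

```shell
# Show which AppArmor profiles are loaded and whether each is in
# enforce or complain mode.
aa-status

# Switch the web server's profile into enforce mode so the kernel
# blocks anything outside the profile's allowed behavior.
aa-enforce /etc/apparmor.d/usr.sbin.apache2
```

Running a new profile in complain mode first, and watching the logs, is the usual way to shake out false positives before enforcing.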

The items we mentioned here are meant to be a base for you to build on.  By completing these steps, your machines will be more secure than a default Linux machine.  What you need to do from here is up to your security officers, customers, and the other businesses your company interacts with.

Zenoss: How the Big Four Should Do Monitoring…

The biggest benefit to both open source producers and consumers is the community.  The Zenoss community is its greatest strength, as we learned in podcast number 21 back on 3/13/2010.  The tool is being used by some large corporate customers right alongside an army of small businesses.  If you do not have a staff experienced with Zenoss, or a large enough staff to properly roll it out, Zenoss has the ability to support any size company, for a fair fee of course.  As opposed to GroundWork, which is based on Nagios, Zenoss is a completely distinct product.  Zenoss is a blended company that delivers an open source, free-to-use Core product, and offers additional support and features through their Enterprise version, for an additional fee.

So how was Zenoss to use?  Well, if you actually read the documentation and watch the videos, the tool is straightforward, relatively easy to use, and quick to get up and running.  After the normal initial learning curve with the UI, you can start to really get to the meat and value of what the product has to offer.  Your mileage may vary, but I started to get the hang of it after watching the videos and spending about fifteen or twenty unfocused hours on it.  As has been my experience with most software, the more you know about this type of software, the easier it will be for you to get up to speed.

Let me state this again: watching the videos helped immensely, so at least start there if you do not want to read the manuals either before or as you are getting this set up.  The UI for Zenoss was the hardest item for me to learn.  While chatting with Matt and Mark from Zenoss, they assured us they understood it was a problem and that we should expect big changes in this area within the next few releases.  Once you have decoded how to work in the app, it really starts to make sense.  I could start to see the logic in what started out as chaos.

Once you have enough data to work with (I started with just a few days’ worth), the tool starts to get interesting.  Creating custom reports and alerts is so easy that I could easily see people ending up with report overload.  For reports, you tell the tool the server or group of servers you want to report on.  Then you tell it which of the available metrics you want to report on and how to lay out the report.  The tool is all Ajax/web-GUI based, it works smoothly, and it really is just that easy.

One of the neatest features in Zenoss is the way they handle alerting.  You have the option as a user to set up your own alerts.  Alerts can also be set up for groups, as most normal systems do.  Why is that something neat?  I have been in very few IT shops where the team members I worked with didn’t each have their own pet systems or applications.  Allowing each of them to set up the extra alerting they want, on a one-by-one basis, is one of the many signs that experienced operational engineers built this system.  There are other little things that support personnel will pick up on that just make you stop and say, “Wow, someone really thought of that feature.”  It is these little differences, which as individual items do not seem like a lot, that as a collective you will quickly learn to love about this tool.

The next big thing with Zenoss is what they call ZenPacks.  ZenPacks are groups of scripts and small applications that add functionality, like a plug-in does in Firefox.  This is where the strength of the community really comes in.  I am running an ESXi server at home on a Core i7 machine I built.  While I love the server, VMware has intentionally encumbered several of the features that normal ESX has.  One of those is in the area of monitoring: VMware intentionally built the system with no SNMP-based agent built in.  With most systems, this means you are just out of luck for checking anything other than whether the machine is up and has connectivity.  With Zenoss, there is very likely a ZenPack for that.  If a ZenPack does not already exist, there is a group of people in the community who love challenges and are eager to help you create one.  This level of support is really helping the Zenoss team and community set themselves apart.

So what didn’t I like about the product?  The UI takes serious effort to master.  The tutorials and hours of videos are a tremendous help while the Zenoss team works to make it more intuitive.  The other issue is the limited support for using SSH.  It is another area we were assured is being addressed, but it took me considerable effort to figure out the first time I tried.  By contrast, SNMP-based discovery worked perfectly, assuming that all of your machines are using the same read and write community strings or user name and password.  The last minor issue is that several of the services I have running on my test machines were either misidentified, causing a failure after discovery, or missed completely.  This is easy to fix in a small environment of less than 50 servers and won’t take you long to correct.  Another feature I missed that would help is an import feature as a way to add systems to your installation.
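Before pointing discovery at a machine, it is worth confirming it answers SNMP at all with a quick walk of the system tree (the community string and hostname here are hypothetical):

```shell
# Query the standard "system" subtree over SNMP v2c; if this returns
# sysDescr, sysName, and friends, Zenoss discovery should see the box.
snmpwalk -v 2c -c public server1.example.com system
```

If the walk times out, fix the agent or the community string on the target first; that saves chasing phantom discovery failures in the monitoring tool.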

Once you have this tool up and running, you really do start forgiving the pain it put you through to get there.  Creating reports quickly and using the event correlation features starts to pay off right away.  The ZenPacks will help you keep things monitored without having to write something custom.  All in all, this is definitely a solid, scalable, and flexible system for monitoring.  I suggest that you download the VM and give it a try.