Should the Baracus Project be a member of your A-Team?

As stated in the Puppet article, we have been investigating alternatives to the closed source server build and management options on the market.  Novell has been sponsoring a project called Baracus, an open source project that is trying to become the next generation system for booting, building and managing the power state of systems.  The project was announced to the public on November 19, 2010.  It seemed like something we should check out, so we did.  How good or bad could it really be?

To test the system out, we chose to download the project's SUSE Studio-created VM.  Being a Novell project, it's based on openSUSE 11.2, with all of the setup steps in the documentation already done for you.  A few normal admin tasks to change passwords, set up my account and handle the other usual system chores got the system up.  To make everything work, there are a few other things you will need to do, like set up a DHCP server.  While not required, a DNS server will also make life easier, so keep that in mind when starting out.  With almost any distribution you can set both of these up pretty easily; I suggest installing Webmin if you have never done it before.  It will help you get them up and running, with clean configuration files at least.

The project's documentation is already at a point where the setup instructions are amazingly complete.  I was able to log in to the Web GUI without any issue and start roaming around.  The system is very well thought out.  The look, feel and options included show that it was built by system administrators for system administrators.  It's not the prettiest interface, but it works and things are grouped logically.  You have the option of using the Web GUI or the command line.  A few of the commands really have to be run via the command line at this point, but the team seems to be delivering improvements in a timely fashion.  There have been two updates since I downloaded it in November, both with noticeable improvements.
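For reference, here is roughly what that DHCP piece can look like.  This is only a sketch under assumptions: the addresses are placeholders for your own subnet, and the pxelinux.0 boot file name is the usual PXE default rather than something taken from the Baracus docs, so double-check both before using it.

# Minimal ISC dhcpd setup for PXE booting new machines; all addresses are placeholders
cat > /etc/dhcpd.conf <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.150;        # pool for machines being built
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.5;   # your DNS server, if you set one up
  next-server 192.168.1.10;                 # the Baracus server (TFTP)
  filename "pxelinux.0";                    # standard PXE loader; confirm against the Baracus docs
}
EOF
rcdhcpd restart    # openSUSE init script; use your distribution's equivalent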

How does it work?
The system uses a three-step boot process.  The first boot interrogates the system hardware, builds a hardware profile and uploads it to the Baracus server.  Once it is uploaded, or registered, you will see the server's MAC address in a list and be able to view what step the process is at.  The second boot will either bring the server to a halt, giving you time to choose the configuration options, or start building it if you have already set that up.  The third boot (the fourth if you paused to configure) sets the server to boot from either the local disk or a network boot location, depending on your configuration choices.  When we say "configuration options" here, we mean that you can set up almost anything you want to do to a server, from upgrades and patches to turning it into a net-booted system.

The first thing that impressed me about this system is that it's not just a SUSE build system.  As of this writing you can build Debian, openSUSE, SUSE Linux Enterprise Desktop and Server, Fedora, Red Hat Enterprise Linux, Ubuntu, OpenSolaris, ESX 4.x, Windows 7 and Windows Server 2008, and XenServer.  There are example silent-install configuration files available for most, if not all, of the systems listed.  Updating these files and adding them to the database used by Baracus is easy and took only a few minutes.

The virtual machine only comes with openSUSE pre-installed, so I set off to figure out how to add Ubuntu.  It turns out it takes one command: "basource add --isos --distro ubuntu-10.10-x86_64".  That's it.  It goes out and downloads the ISO, puts it in the proper location, creates the needed mount points, and adds it to the database so that you can build servers from it.  If you want to do a silent install of any of the supported OSes, all you have to do is make your modifications to the appropriately named file and issue another command to add that configuration to the database.  In little more than the time it took the system to download the ISO, I was testing my first build of Ubuntu 10.10 over a network connection with a silent install.  Having spent hours or nights in the past setting up systems to build and boot off of the network, I found this pretty impressive.  I have since set up Fedora and multiple openSUSE versions as well.
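To give a feel for what "make your modifications to the appropriately named file" involves, here is a heavily trimmed sketch of an Ubuntu silent-install (preseed) file.  The file name is purely illustrative and the exact command for loading it into the Baracus database is in the project docs; the directives themselves are standard debian-installer preseed options.

# Heavily trimmed Ubuntu 10.10 preseed sketch; the file name is illustrative only
cat > ubuntu-10.10-silent.preseed <<'EOF'
d-i debian-installer/locale string en_US
d-i netcfg/get_hostname string unassigned-host
d-i passwd/username string admin
d-i passwd/user-fullname string Admin User
d-i passwd/user-password-crypted password $6$replace-with-a-real-hash
d-i partman-auto/method string regular
d-i partman/confirm boolean true
d-i pkgsel/include string openssh-server
d-i finish-install/reboot_in_progress note
EOF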

On the network here we built an Ubuntu 10.10 VMware system in roughly 15 minutes.  We set up custom disk partitions, our users and groups, and additional software packages.  With a few more changes we had a script to update the repos and patch the system, and finally we set up some scripts to automatically configure Puppet.  Now, in less than 20 minutes, we can take a raw VMware server and have it completely configured and up to date.  Having done all of this in the Web GUI, I tried doing it from the command line.  It worked just as well and was actually a little faster.
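The post-build scripting was nothing exotic.  A first-boot script along these lines, run as root, covers the "update the repos and patch the system" step and bootstraps the Puppet agent; the Puppet master name is a placeholder and the service-enable step is an assumption about how the Ubuntu package behaves on your release.

#!/bin/bash
# First-boot sketch: patch the new Ubuntu system and point it at our Puppet master.
set -e
apt-get update
apt-get -y dist-upgrade
apt-get -y install puppet
# "puppet.example.com" is a placeholder for your own Puppet master.
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF
# The Ubuntu package ships with the agent disabled by default; adjust if your release differs.
sed -i 's/^START=no/START=yes/' /etc/default/puppet
/etc/init.d/puppet start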

So it’s so green what is wrong with it?
Really, there are very few misses in the Web GUI, documentation and command line.  There are a few things we believe to be either documentation errata or bugs, and they did not show themselves until I tried to bend the system to what I wanted.  The problems with the Web GUI are mostly that we would like to see better error messages in a few odd spots and more AJAX-like behavior.  Having the assigned machine names shown instead of MAC addresses would be really helpful, as would some other views of the systems.  The groups functionality seems too hard to use and doesn't offer enough right now.  Most of the documentation is complete, but documentation on errors and what to do about them needs to be fleshed out.  Where we did have problems, though, it didn't take long to find and fix them.

Our Conclusion
Baracus is a great system that should become an amazing one with just a few cleanup and documentation fixes.  At this time I am not sure it's really ready for production use.  So we here at Linuxinstall.net say try it, but don't rely on it just yet.

Did we not answer your question?  Please ask it in the comments.

 

Automation – Can too much of a good thing be bad?

Senior systems administrators on any platform know that automation is the single fastest way to improve the effectiveness of their team.  Scripts provide stability and repeatability and reduce the time spent on often-repeated tasks.  Done correctly, automation makes everything more stable and manageable.

However, scripts for managing systems can be a double-edged sword.  On one hand, they make a team highly efficient.  They can help junior admins perform far above their experience level and free senior admins to investigate more difficult problems.  On the other hand, they can lead to a loss of knowledge: the knowledge it took to create the scripts becomes locked inside of them.  So what do you do to strike the proper balance?  How can you keep the knowledge fresh in everyone's mind while still automating?  What steps can be taken to avoid knowledge erosion and, worse, the brain drain or vacuum that is left when people leave?

The first thing to remember is that there is no single thing you can do to answer these questions.  Here we will provide some tips and ideas we have found to be useful and effective.  This is a short list, and we hope it will inspire you to think about what might work for you and your company.

The first item is well-documented scripts and procedures.  Taking five minutes to write up what you were thinking when you wrote a script can save you days trying to figure it out later.  As object-oriented scripting languages like Python, Ruby and Perl take hold, it becomes easier to break complex scripts down into smaller, more digestible chunks.  These smaller chunks, like the core ideas behind Linux, should do one thing and do it well.  The names of the functions should describe what they do: a function called createNewSSHKeys should probably create new SSH keys.  That, combined with an explanation inside the function of what you were trying to do, will help you and others manage them.  When you get really good at this way of thinking, people should be able to take your function calls and write a manual procedure that could replace your automation.  If that is your goal, it only makes sense to start with a well-documented procedure to compare against when you are done scripting.  It is unlikely that every procedure step will match a function or series of function calls, but getting close still counts.
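As a trivial illustration (our own sketch, not lifted from any production script), the hypothetical createNewSSHKeys mentioned above might look like this, with the comment capturing the "what were we thinking" part:

# createNewSSHKeys: generate a fresh SSH key pair for the given user.
# We regenerate rather than reuse keys so a departing admin's key can be
# retired without touching anyone else's access.
createNewSSHKeys() {
    local user="$1"
    local home
    home=$(getent passwd "$user" | cut -d: -f6)
    mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
    ssh-keygen -t rsa -b 2048 -N '' -f "$home/.ssh/id_rsa"
    chown -R "$user": "$home/.ssh"
}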

As much as self-documenting scripts help, documenting the configuration files for your scripts also keeps things fresh in people's minds.  At the very least, if done correctly, it gives them a breadcrumb trail to follow to see whether what they think is being set is actually set.  We recently began testing Puppet, an automated way to manage server configuration files and other admin-related tasks, and its configuration files are a great example.  They let you use a combination of intelligent names and comments to tell the person reading the file what will be changed, and they include a description of where to look to verify that the changes are being made correctly.  This means I don't need to know Ruby, the language Puppet is written in, to figure out what it's going to do or how; the configuration file itself tells me everything I need to know.  When you write your own scripts, the time it takes to go that far may not be warranted.  So, at the very least, make sure the file has comments that tell people what the configuration settings mean and where to look for the output they produce.
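For your own scripts, that can be as simple as a sourced shell configuration file whose comments say what each setting changes and where to look for the result.  A made-up example:

# backup.conf - settings sourced by a hypothetical nightly-backup.sh
# Where the compressed archives end up; look here first if a backup seems missing.
BACKUP_DEST=/srv/backups
# Days of history to keep; the cleanup step deletes anything older.
RETENTION_DAYS=14
# Log file the script writes; errors from tar and rsync land here.
LOG_FILE=/var/log/nightly-backup.log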

Try to keep everyone's skills sharp so they are ready to slice through problems as they arise.  This also means internal training.  One of the things we have participated in on a regular basis is a short, one-hour refresher put on by the subject matter expert (SME) for each of the technologies we use.  Doing this accomplishes a few things at once.  It helps the SME keep their documentation current.  It gives the SME an opportunity to share changes they want to make, or have made, in the environment.  And it gives everyone supporting the environment a chance to ask questions about the technology when there is no pressure.  Where possible, an annual review of each area a team supports goes a long way toward keeping the team as productive as possible.

While you can never completely prevent brain drain when a team member leaves, the steps above, if done correctly, can go a long way.  Having been the person transitioned to more than once, I can say that the better these steps are followed, the better we have felt about taking on the responsibility.  Another side effect of these approaches, and others along the same lines, is that they allow people to migrate from one SME area to another.  This helps people stay fresh and keeps them from becoming bored and complacent.  The more driven your team is to solve business problems, the more profitable you will be.

Eps. 18 – Managing Large Numbers of Linux Systems with Automation

Running Time: 42:21

1) Introduction

2) News

Free Linux Training at the Linux Foundation:

https://linuxinstall.net/linux_news/2010/1/26/free-linux-training-webinars.html

Is the addition of proprietary software in Ubuntu going to help or hurt?

http://linuxinstall.net/linux_news/2010/1/21/will-the-addition-of-proprietary-software-in-distros-hurt-fr.html

If Linux just did Blank it would be ready for Prime Time?

https://linuxinstall.net/linux_news/2010/1/25/if-linux-just-did-blank-it-would-be-ready-for-prime-time.html

3) Managing Large Numbers of Linux Systems

4) Conclusion

E-Mail us at podcast@linuxinstall.net

Go to the WebSite to call us via Google Voice

Follow us on Twitter @linuxinstall

Follow us on Identi.ca as linuxinstall or http://identi.ca/linuxinstall

Look for us and comment on iTunes and Odeo