So, it’s been a few weeks since I’ve been able to post, mainly due to getting life back in order after being out for a week at PuppetConf 2013.
This was an enjoyable conference, with roughly 1,700 people from around the world showing up to share ideas and scalable systems-management techniques.
Developer Day was a good experience overall; however, I’m hoping that there is more direction next year so that we can accomplish more during the time allotted. We spent our time at the Puppet Core table and worked on adding the missing support for group membership management. I’m happy to say that we did get this working, and verified that it works with LDAP groups. Unfortunately, this came together at the very last minute, so we had to push without formal tests.
The bug is #19414 if you are interested in following along. I’m hoping to have some additional time in the near future to finish this up properly.
Most of the conference sessions centered around how to manage systems at scales of hundreds, if not thousands, of nodes. There was a great deal of discussion surrounding the concept of offline catalog application as a means for scaling. This technique involves compiling offline catalogs such that nodes can pick up the compiled catalog as necessary and apply them without asking the Puppet server each time. It is definitely true that you can cover a great deal more surface area this way and, with proper log collection and analysis techniques, you may be able to scale much further than by calling in to a master each time.
I do have some misgivings regarding this technique. Namely, how exactly do you bootstrap your environment in such a way that the information is both secure and able to be easily procured by hosts over time while minimizing bandwidth usage. I have some ideas about combining Puppet with rsync to more efficiently handle this situation and will post if my theories prove out.
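For the curious, the basic offline flow looks something like the following. This is only a sketch of my rsync idea, not anything from the conference sessions; the hostnames and paths are made up.

```shell
# On the master: compile a catalog for a given node
puppet master --compile node01.example.com > node01.json

# Ship only the changed bytes to the node
rsync -az node01.json node01.example.com:/var/lib/puppet/catalog.json

# On the node: apply the pre-compiled catalog without contacting the master
puppet apply --catalog /var/lib/puppet/catalog.json
```

The appeal of rsync here is that unchanged catalogs cost almost no bandwidth on subsequent syncs; the unsolved part is still securely bootstrapping the nodes in the first place.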
Second to the pre-compiled catalog sessions were discussions about MCollective. MCollective definitely seems to have beaten out the orchestration competition thus far, and there were some interesting discussions about improving system security by disabling SSH until MCollective re-enables it to allow users to log in. I am a bit curious as to how Ansible will hold up over time, but I am leery of the 0MQ usage since I’m not a huge fan of not knowing exactly what my environment is doing at all times.
MCollective was, of course, tied back into the scaling discussion, with the claim that it should scale almost without bound given the way it works over AMQP. I still think that you may end up with some issues related to obtaining server status messages in a reasonable amount of time, but I’m sure that can be overcome with some creative programming.
The presentations by Henrik Lindberg and Dawn Foster regarding the future of the Puppet language and the future of the Puppet community were both very worthwhile. Mr. Lindberg described the Puppet future parser and how it now provides for both loops and Unicorns! OK, so no Unicorns, but there are lambdas, which are possibly almost as interesting…maybe. Lambdas enable loops and, as a side effect, you can now pass lambdas to your own custom functions. I’m certainly looking forward to wreaking havoc with this!
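For those who haven’t seen it, here’s a minimal sketch of what a future-parser lambda looks like (this requires enabling the future parser in puppet.conf; the user list is obviously made up):

```puppet
# puppet.conf must set: parser = future
$admins = ['alice', 'bob', 'carol']

# Iterate over the array, passing each element to the lambda
$admins.each |$user| {
  notify { "managing_${user}":
    message => "Would manage account ${user}",
  }
}
```

The same `|$x| { ... }` block can be handed to your own functions, which is where the real havoc-wreaking potential lies.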
Ms. Foster regaled us with tales of the growth of the Puppet community and how they’re making a concerted effort to get back to community contributors in a timely manner. As someone who waited two years for a patch to go in, this is really exciting news and I have been encouraged by the level of helpful response that I’ve already gotten to my latest submissions.
There were also discussions about Puppet User Groups, or PUGs. If I can dig up some interest in the Columbia or Baltimore area, perhaps we’ll get one started. As a start, I might just dovetail off of the Ubuntu User Group at the Community College of Baltimore County (CCBC). If you’re interested, give us a shout.
Moustaches and Logs!
So, LogStash is an excellent log multiplexer and filter utility. I had been using it for a while, but got a look at some of the new features as presented by Jordan Sissel. Of note, the built-in LogStash web interface is being deprecated in favor of simply using Kibana to interface with ElasticSearch. One question that I haven’t found a satisfactory answer for is how to effectively restrict Kibana 3 log access between different users. This becomes especially important if you are mixing log data from different customers but only want particular customers to access their own data. Obviously, you can tag the data via LogStash on ingest, but how do you ensure that users can only retrieve data with those tags?
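The tagging half of the problem is straightforward. Something like the following LogStash filter sketch would do it (the customer name and host-matching convention are hypothetical):

```
filter {
  # Hypothetical: tag anything from an "acme-" prefixed host as
  # belonging to that customer, so queries can filter on the tag.
  if [host] =~ /^acme-/ {
    mutate { add_tag => [ "customer_acme" ] }
  }
}
```

The hard part remains on the query side: enforcing that a given Kibana user can only retrieve events carrying their own tag.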
I’m thinking that a custom proxy will need to be developed to effect this level of separation and I’m hoping that somebody beats me to the punch.
Sessions of Note
As a final note, I’ll leave you with this list of conference sessions that I would recommend watching. Thanks for reading, and I hope we’ll see some of you at LISA ’13 or PuppetConf 2014.
- [Loops and Unicorns – The Future of the Puppet Language](https://puppetlabs.com/presentations/loops-and-unicorns-future-puppet-language)
- More LogStash Awesome
- [Intro to Systems Orchestration with MCollective](http://puppetlabs.com/presentations/intro-systems-orchestration-mcollective)
- [Keynote: Why Did We Think Large Scale Distributed Systems Would be Easy?](http://puppetlabs.com/presentations/keynote-why-did-we-think-large-scale-distributed-systems-would-be-easy)
- [Keynote: VMware vCHS, Puppet, and Project Zombie](http://puppetlabs.com/presentations/keynote-vmware-vchs-puppet-and-project-zombie)
- Razor: A Fresh Look at Provisioning
But don’t take my word for it, you can find them all at the PuppetConf 2013 Videos page.
Next Time….maybe not Puppet…
Trevor has worked in a variety of IT fields over the last
decade, including systems engineering, operating system
automation, security engineering, and various levels of
At OP his responsibilities include maintaining overall
technical oversight for Onyx Point solutions, providing
technical leadership and mentorship to the DevOps teams. He is
also responsible for leading OP’s solutions and intellectual
property development efforts, setting the technical focus of
the company, and managing OP products and related services. In
this regard, he oversees product development and delivery as
well as developing the strategic roadmap for OP’s product line.
At Onyx Point, our engineers focus on Security, System
Administration, Automation, Dataflow, and DevOps consulting for
government and commercial clients. We offer professional
services for Puppet, RedHat, SIMP, NiFi, GitLab, and the other
solutions in place that keep your systems running securely and
efficiently. We offer Open Source Software support and
Engineering and Consulting services through GSA IT Schedule 70.
As Open Source contributors and advocates, we encourage the use
of FOSS products in Government as part of an overarching IT
Efficiencies plan to reduce ongoing IT expenditures attributed
to software licensing. Our support of and contributions to Open
Source are just a few of our many guiding principles:
- Customer First.
- Security in All We Do.
- Pursue Innovation with Integrity.
- Communicate Openly and Respectfully.
- Offer Your Talents, and Appreciate the Talents of Others.