Many years ago (something like six), the Puppet Server had a distinct problem: any local Ruby code loaded in a given Puppet Environment could potentially be overwritten by Ruby code from another Puppet Environment.
Fast forward to last week and this issue, while heavily mitigated by the latest release of the Puppet Server, still exists!
Update: Fixed the code to work around JRuby issues!
At first, I was quite irritated to find that this long-running issue was still plaguing us. However, after speaking with several of the Puppet engineers, I realized that they’ve actually done all that they can!
The fundamental issue is that Ruby modules and classes loaded via external files, generally in the puppet_x space, are just pure Ruby. I have no idea why this took so long to sink in, but the realization was that Puppet functions, types, and providers are safe entities that the Puppet ecosystem itself can wrap and control.
For instance, a function looks something like the following:
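The original snippet isn’t preserved here, but a modern (Puppet 4.x API) function generally looks like this; the function name, parameter, and body are all illustrative:

```ruby
# lib/puppet/functions/mymodule/do_stuff.rb
# Hypothetical example function; names and behavior are illustrative.
Puppet::Functions.create_function(:'mymodule::do_stuff') do
  dispatch :do_stuff do
    param 'String', :input
    return_type 'String'
  end

  def do_stuff(input)
    input.upcase
  end
end
```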
So, looking at this, Puppet has complete control over the creation of the function, its naming, and its namespace. This means that the Puppet team can (and did) take care of this on their own.
But…unfortunately, this doesn’t extend to the Ruby side itself.
For instance, let’s revisit our example with some additional content:
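Continuing the hypothetical example, the function now pulls in shared Ruby code from the puppet_x space (the names are, again, illustrative):

```ruby
# lib/puppet/functions/mymodule/do_stuff.rb
# Hypothetical example: the function now delegates to shared puppet_x code.
require 'puppet_x/mymodule/ext_stuff'

Puppet::Functions.create_function(:'mymodule::do_stuff') do
  dispatch :do_stuff do
    param 'String', :input
    return_type 'String'
  end

  def do_stuff(input)
    # This constant is plain Ruby: whichever environment loaded it last wins!
    PuppetX::MyModule::ExtStuff.transform(input)
  end
end
```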
Over in ext_stuff.rb we might have something like:
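A hypothetical version of that file, showing that it is nothing but plain Ruby:

```ruby
# lib/puppet_x/mymodule/ext_stuff.rb
# Hypothetical example of a plain Ruby puppet_x class.
module PuppetX
  module MyModule
    class ExtStuff
      def self.transform(input)
        input.upcase
      end
    end
  end
end
```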
You can see here that the inclusion of the external Ruby code is completely unbounded and uncontrollable by the Puppet framework itself. So, what do we do? Well, one solution would be to never use an external module or class again. While this would certainly work, I think that everyone would pretty much run screaming at the sight of your code.
Metaprogramming to the Rescue!
I honestly never thought that I’d be happy to dredge up old memories of Ruby metaprogramming but, in this case, it seems to be the right answer to the problem.
Now, I won’t say that this is the most elegant solution, but hopefully someone out there will come up with something better at some point!
The key is to embed the name of the Puppet Environment into the module namespace of your Ruby code. While this does create a set of objects per Puppet Environment, they will eventually get cleaned up with the JRuby pool flushes and you should be able to safely reduce your number of compile masters by combining multiple environments onto the same server.
To effect our solution, you’ll first want to tackle your external dependency. I highly recommend having a working set of spec tests prior to starting this conversion!
Converting the External Class
In our sample case here, we’ll start with the ext_stuff.rb file:
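Here is a sketch of one possible conversion. The `PuppetX_<environment>` naming convention, the fallback value, and the class body are all my own illustrative choices, not the only way to do this; the fallback also lets the file be exercised outside of a Puppet run (e.g. in spec tests):

```ruby
# lib/puppet_x/mymodule/ext_stuff.rb
# Sketch: embed the Puppet Environment name in the module namespace so
# that each environment gets its own, isolated copy of the class.

# Discover the current environment; fall back when running outside Puppet.
env_name =
  if defined?(Puppet)
    Puppet.lookup(:current_environment).name.to_s
  else
    'production'
  end

# Create (or reuse) a per-environment namespace, e.g. PuppetX_dev.
namespace_name = "PuppetX_#{env_name}"
namespace =
  if Object.const_defined?(namespace_name)
    Object.const_get(namespace_name)
  else
    Object.const_set(namespace_name, Module.new)
  end

# Define the class inside the dynamic namespace (idempotent on re-load;
# the `false` keeps const_defined? from searching outside this module).
unless namespace.const_defined?(:ExtStuff, false)
  namespace.const_set(:ExtStuff, Class.new do
    def self.transform(input)
      input.upcase
    end
  end)
end
```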
Is this strange? You bet! But, if you have three environments, dev, test, and production, this would create the following corresponding Ruby modules:
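Assuming a naming convention that appends the environment name (e.g. `PuppetX_<environment>`), the generated namespaces would be:

```
PuppetX_dev
PuppetX_test
PuppetX_production
```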
This ensures that, for a given environment, there is no longer any bleed over between the various classes!
Using the Dynamically Created Module
This isn’t the end of the story, though; we now have to use this monstrosity that we’ve created.
To do this, you’ll need to pay attention to two things. First, you need to use `load` instead of `require` when loading the class. This is so that the loader doesn’t think that it already has the class and will, instead, be sure to re-read the file every time it is called.
Yes, this does create a small performance penalty but I’ll take it over paying for an entirely new server!
The second is that, as you may have guessed, you’ll need to reference the class by its dynamic name in order to actually use it.
Both of these techniques are demonstrated in the code snippet below:
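A hypothetical sketch of the function rewritten this way (the file path, function name, and `PuppetX_<environment>` naming convention are all assumptions for illustration):

```ruby
# lib/puppet/functions/mymodule/do_stuff.rb
# Hypothetical sketch; names and paths are illustrative.
Puppet::Functions.create_function(:'mymodule::do_stuff') do
  dispatch :do_stuff do
    param 'String', :input
    return_type 'String'
  end

  def do_stuff(input)
    # Use `load`, not `require`: `load` re-reads the file on every call,
    # so the class is (re)defined under the current environment's namespace.
    load File.expand_path('../../../puppet_x/mymodule/ext_stuff.rb', __dir__)

    # Reference the class by its dynamic, per-environment name.
    env_name = Puppet.lookup(:current_environment).name.to_s
    Object.const_get("PuppetX_#{env_name}")::ExtStuff.transform(input)
  end
end
```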
While a bit irritating to use, this seems to effectively mitigate the multi-tenant issue that currently plagues the external code side of the Puppet environment.
If, like me, you are a Forge module author, please consider adopting this technique to ensure that your Puppet modules are environment safe for our users!
We’ll be porting the SIMP modules as quickly as we can and if anyone ends up with a better or more elegant approach to the problem that doesn’t create global naming conflicts, please send them my way!
Trevor has worked in a variety of IT fields over the last decade, including systems engineering, operating system automation, security engineering, and various levels of … At OP, his responsibilities include maintaining overall technical oversight for Onyx Point solutions and providing technical leadership and mentorship to the DevOps teams. He is also responsible for leading OP’s solutions and intellectual property development efforts, setting the technical focus of the company, and managing OP products and related services. In this regard, he oversees product development and delivery as well as developing the strategic roadmap for OP’s product line.
At Onyx Point, our engineers focus on Security, System Administration, Automation, Dataflow, and DevOps consulting for government and commercial clients. We offer professional services for Puppet, RedHat, SIMP, NiFi, GitLab, and the other solutions that keep your systems running securely and efficiently. We also offer Open Source Software support, engineering, and consulting services through GSA IT Schedule 70.
As Open Source contributors and advocates, we encourage the use of FOSS products in Government as part of an overarching IT Efficiencies plan to reduce ongoing IT expenditures attributed to software licensing. Our support of, and contributions to, Open Source are just one reflection of our many guiding principles:
Security in All We Do.
Pursue Innovation with Integrity.
Communicate Openly and Respectfully.
Offer Your Talents, and Appreciate the Talents of Others.