A clear, updated, practical, and focused view of the current state of the technology and the evolution of Puppet is what we need to tackle our IT.
He has attended several PuppetConf and PuppetCamps as both speaker and participant, always enjoying the vibrant and friendly community, each time learning something new.
Over the years, he started to publish his Puppet code, trying to make it reusable in different scenarios.
The result of this work is the example42 Puppet modules and control repo, a complete, feature-rich sample Puppet environment.
You can read about example42 at www. You can follow Franceschi on his Twitter account at alvagante.

Jaime Soriano Pastor was born in Teruel, a small city in Spain. He has always been passionate about technology and science.

Get to grips with Hiera and learn how to install and configure it, before learning best practices for writing reusable and maintainable code. You will also be able to explore the latest features of Puppet 4, before executing, testing, and deploying Puppet across your systems.
As you progress, Extending Puppet takes you through higher abstraction modules, along with tips for effective code workflow management. Finally, you will learn how to develop plugins for Puppet - as well as some useful techniques that can help you to avoid common errors and overcome everyday challenges.
What you will learn:

- Learn the principles of the Puppet language and ecosystem
- Harness the features and power of Hiera and PuppetDB
- Explore the different approaches to Puppet architecture design
- Use Puppet to manage network, cloud, and virtualization devices
- Manage and test the Puppet code workflow
- Tweak, hack, and adapt the Puppet extension points
- Get a run-through of the strategies and patterns to introduce Puppet automation
- Master the art of writing reusable modules

About the Author: Alessandro Franceschi is a long-time Puppet user, trainer, and consultant.
He started using Puppet in , automating a remarkable number of customers' infrastructures of different sizes, natures, and complexities.

Most of the time, if you get the following error when running your client:

warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

it is because of some invalid YAML output from your external node script.
Check http: for more information. Sometimes these errors can be a little cryptic. Below is a list of common errors and their explanations that should help you troubleshoot your manifests. A misplaced colon in a resource declaration is one common cause, and a variant of the same error appears if you forget a comma, for instance at the end of a line in a parameter list. When referencing a class and a define, the define also needs to start with a capital letter, as in Classname::. A 'Duplicate definition' error means the same entry has been defined twice, so one of them needs removing.
What most often happens is that the same resource is exported by two nodes. Node inheritance is currently only really useful for inheriting static or self-contained classes, and as a result is of quite limited value.
A workaround is to define classes for your node types, essentially including classes rather than inheriting them.

Class Inheritance and Variable Scope

The following would also not work as generally expected. To avoid the duplication of the template filename, it is better to sidestep the problem altogether with a define. Qualified variables might provoke alternate methods of solving this issue.
You can use qualified variables for this. Could not retrieve catalog: when this error relates to new plugin code, the pluginsync feature will synchronise the files, and the new code will be loaded when both daemons are restarted.
Dashboard can be used as an external node classification tool.
Here you can learn how to install Dashboard. The basic steps are:

- Obtain the source
- Configure the database
- Start the server
- Import reports (optional)

As a Rails application, Puppet Dashboard can be deployed in any server configuration that Rails supports.
Instructions for deployment via Phusion Passenger are coming soon, as is support for other databases.

External Node Tool

Puppet Dashboard functions as an external node tool. All nodes make a Puppet-compatible YAML specification available for export. See the instructions here for more information about external nodes.

About

Puppet Dashboard is fairly self-explanatory; if you have set it up using the installation instructions, just visit the port it listens on to start using it.

Templates

Puppet supports templates and templating via ERB, which is part of the Ruby standard library and is used for many other projects, including Ruby on Rails.
Templates allow you to manage the content of template files, for example configuration files that cannot yet be managed directly by a built-in Puppet type.
This might include an Apache configuration file, a Samba configuration file, and so on.

Evaluating templates

Templates are evaluated via a simple function. Best practice is to put templates in the templates directory inside your module. Templates are always evaluated by the parser, not by the client. This means that if you are using puppetmasterd, the templates only need to be on the server, and you never need to download them to the client. It also means that any client-specific variables (facts) are learned first by puppetmasterd during the client start-up phase, and those variables are then available for substitution within templates.
Using templates

Here is an example for generating the Apache configuration for Trac sites. If the variable you are accessing is an array, you can iterate over it in a loop, producing one block of output per element. Note that normally, ERB template lines that contain only code would get translated into blank lines, because ERB generates newlines by default.
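The array iteration described above can be sketched with plain ERB; the variable name vals and its values are illustrative:

```ruby
require 'erb'

# A minimal sketch of iterating over an array in an ERB template.
# The variable name 'vals' and its contents are illustrative.
template = <<~'ERB'
  <% vals.each do |val| -%>
  Some stuff with <%= val %>
  <% end -%>
ERB

vals = %w[val1 val2 otherval]

# trim_mode '-' suppresses the newline after lines ending in -%>,
# so code-only template lines do not leave blank lines in the output
puts ERB.new(template, trim_mode: '-').result(binding)
```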
Conditionals

ERB templating supports conditionals: a simple if construct is a quick and easy way to conditionally put content into a file. A related snippet can print all the tags defined in the current scope.

Virtual Resources

Resources can be specified in a way that marks them as virtual, meaning that they will not be sent to the client by default. You mark a resource as virtual by prefixing @ to the resource specification; for instance, you can define a virtual user this way.

How This Is Useful

Puppet enforces configuration normalization, meaning that a given resource can only be specified in one part of your configuration.
For most cases, this is fine, because most resources are distinctly related to a single Puppet class — they belong in the webserver class, mailserver class, or whatever. Some resources can not be cleanly tied to a specific class, though; multiple otherwise-unrelated classes might need a specific resource.
For instance, if you have a user who is both a database administrator and a Unix sysadmin, you want the user installed on all machines that have either database administrators or Unix administrators. In these cases, you can specify the user as a virtual resource, and then mark the user as real in both classes.
Thus, the user is still specified in only one part of your configuration, but multiple parts of your configuration verify that the user will be installed on the client.

How to Realize Resources

There are two ways to mark a virtual resource so that it gets sent to the client: you can use a special syntax called a collection, or you can use the simple realize function.
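A minimal sketch of the virtual resource pattern described above; the user name and its attributes are illustrative:

```puppet
# Define the user as virtual in one place
@user { 'luke':
  ensure => present,
  uid    => '100',
}

# Realize it with the function...
realize(User['luke'])

# ...or with a collection, selecting by attribute
User <| title == 'luke' |>
```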
Collections provide a simple syntax for marking virtual objects as real, such that they are sent to the client. Collections require the type of resource you are collecting and zero or more attribute comparisons to specifically select resources. For instance, to find our mythical user, we would collect on the resource title. This is somewhat of an inconsistency in Puppet, because this value is often referred to as the name, but many types have a name parameter, and they could have both a title and a name.
If no comparisons are specified, all virtual resources of that type will be marked real. This attribute querying syntax is currently very simple. You can also parenthesize these statements, as you might expect, so a more complicated collection might combine several comparisons.

Virtual Define-Based Resources

Since version 0. , Puppet provides an experimental superset of virtual resources, using a similar syntax.

About Exported Resources

While virtual resources can only be collected by the host that specified them, exported resources can be collected by any host. You must set the storeconfigs configuration parameter to true to enable this functionality (you can see information about stored configuration on the Using Stored Configuration wiki page), and Puppet will automatically create a database for storing configurations using Ruby on Rails.
Here is an example with exported resources. If hostB exports a resource but hostB has never connected to the server, then no host will get that exported resource. Note that the tag is not required; it just allows you to control which resources you want to import. These types become very powerful when you export and collect them. For example, you could create a class for something like Apache that adds a service definition on your Nagios host, automatically monitoring the web server.

Resource Collections and Overrides

This feature is not constrained to the override in inherited context, as is the case with the usual resource override.
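A sketch of the export-and-collect pattern for the Nagios case described above; the resource title, tag, and service parameters are illustrative:

```puppet
# On each web server node: export a Nagios service check
@@nagios_service { "check_http_${hostname}":
  host_name           => $fqdn,
  check_command       => 'check_http',
  service_description => 'HTTP',
  tag                 => 'webserver',
}

# On the Nagios host: collect everything exported with that tag
Nagios_service <<| tag == 'webserver' |>>
```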
Ordinary resource collections can now be defined by filter conditions, in the same way as collections of virtual or exported resources. In the above example the condition is empty, so all file resources (not just virtual ones) are selected, and all file resources will have their modes overridden. A collection now matches all resources, virtual or not, and allows you to override parameters in any of the collection so defined.
As another example, one can write a collection with a filter condition. Moreover, it is now possible to define resource overriding without respecting the override-on-inheritance rule.

Using Multiple Environments

As of 0. , Puppet supports multiple environments. The idea behind these environments is to provide an easy mechanism for managing machines at different levels of SLA: some machines need to be up constantly and thus cannot tolerate disruptions, and usually use older software, while other machines are more up to date and are used for testing upgrades to more important machines.
Puppet allows you to define whatever environments you want, but it is recommended that you stick to production, testing, and development for community consistency. Puppet defaults to not using an environment, and if you do not set one on either the client or server, then it will behave as though environments do not exist at all, so you can safely ignore this feature if you do not need it.
Please note: for a more detailed discussion, have a look at:

Goal of Environments

The main goal of a setup split by environments is that Puppet can have different sources for modules and manifests for different environments on the same Puppet master.
For example, you could have a stable and a testing branch of your manifests and modules. You could then test changes to your configuration in your testing environment without impacting nodes in your production environment. You could also use environments to deploy infrastructure to different segments of your network, for example a dmz environment and a core environment, or to specify different physical locations.

Using Environments on the Puppet Master

The point of the environment is to choose which manifests, templates, and files are sent to the client.
Thus, Puppet must be configured to provide environment-specific sources for this information. Puppet environments are implemented rather simply: you add per-environment sections to the configuration, and these are used in preference to the main sections. Running with any other environment, or without an environment, would default to the site-wide settings. The per-environment parameters are:

- modulepath: where to look for modules.
- templatedir: where to look for templates. The modulepath should be preferred to this setting, but it allows you to have different versions of a given template in each environment.
- manifest: which file to use as the main entry point for the configuration. The Puppet parser looks for other files to compile in the same directory as this manifest, so this parameter also determines where other per-environment Puppet manifests should be stored.

With a separate module path, it should be easy to use the same simple manifest in all environments. It is recommended that you switch as much as possible to modules if you plan on using environments.
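A minimal puppet.conf sketch of per-environment sections; the environment names and all paths are illustrative:

```ini
[main]
modulepath = /etc/puppet/modules
manifest   = /etc/puppet/manifests/site.pp

[development]
modulepath = /etc/puppet/environments/development/modules
manifest   = /etc/puppet/environments/development/manifests/site.pp

[production]
modulepath = /etc/puppet/environments/production/modules
manifest   = /etc/puppet/environments/production/manifests/site.pp
```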
Additionally, the file server uses an environment-specific module path; if you do your file serving from modules, instead of separately mounted directories, your clients will be able to get environment-specific files.
You can also specify the environment on the command line.

Puppet Search Path

When determining what configuration to apply, Puppet uses a simple search path for picking which value to use. Although you may put plugins such as types or facts into modules, they will still be referenced by the default modulepath, and not by the modulepath of the client environment. If you want your plugins and facts to be part of your environment, one workaround is to create stub modules called plugins and facts in your environment modulepath and place your desired plugins and facts inside the files subdirectory of these stub modules.
Then, when your client requests these files, it will receive the versions from its environment. This allows your facts to differ depending upon your environment.

Reports

Because the transaction internals of Puppet are responsible for creating and sending the reports, these are called transaction reports.
Currently, these reports include all of the log messages generated during the configuration run, along with some basic metrics of what happened on that run. In Rowlf, more detailed reporting information will be available, allowing users to see detailed change information regarding what happened on nodes.

Logs

The bulk of the report is every log message generated during the transaction. This is a simple way to send almost all client logs to the Puppet server; you can use the log report to send all of these client logs to syslog on the server.
Metrics

The rest of the report contains some basic metrics describing what happened in the transaction. There are three types of metrics in each report, and each type of metric has one or more values:

- time: keeps track of how long things took.
- resources: keeps track of resource counts and statuses.
- changes: the total number of changes in the transaction.

Clients default to sending reports to the same server they get their configurations from, but you can change that by setting reportserver on the client, so if you have load-balanced Puppet servers you can keep all of your reports consolidated on a single machine.
Sending Reports

In order to turn on reporting on the client side (puppetd), the report argument must be given to the puppetd executable, either by passing the argument on the command line or by setting it in the configuration file. If you are using namespaceauth.conf, make sure report access is allowed there. There are other report types available that can process each report as it arrives, or you can write a separate processor that handles the reports on your own schedule.
Using Built-in Reports

As with the rest of Puppet, you can configure the server to use different reports with either command-line arguments or configuration file changes. The value you need to change is called reports, and it must be a comma-separated list of the reports you want to use. You can also specify none if you want the reports to just be thrown away.

Writing Custom Reports

You can easily write your own report processor in place of any of the built-in reports.
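A sketch of the relevant settings in the configuration file; the server name and the choice of report processors are illustrative:

```ini
# On the client: enable report sending, optionally to a dedicated server
[puppetd]
report = true
reportserver = reports.example.com

# On the server: a comma-separated list of report processors to run
[puppetmasterd]
reports = store, log
```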
This is only necessary on the server, as the report receiver does not run on the clients.

Using External Report Processors

Many people use only the store report and write an external report processor that processes many reports at once and produces summary pages.
This is easiest if these processors are written in Ruby, since you can just read the YAML files in and de-serialize them into Ruby objects. Then, you can just do whatever you need with the report objects.
It is automatically generated from the reports available in Puppet, and includes documentation on how to use each report.

External Nodes

External nodes allow you to store your node definitions in an external data source, for example a database or other similar repository. A subtle advantage of using an external nodes tool is that parameters assigned to nodes in it are set at top scope, not in the scope created by the node assignment in the language.
This leaves you free to set default parameters for a base node assignment and define whatever inheritance model you wish for parameters set in the children.
In the end, Puppet accepts a list of parameters for the node, and those parameters, when set using an external node tool, are set at top scope.

How to Use External Nodes

To use an external node classifier, in addition to (or rather than) defining a node entry for each of your hosts, you need to create a script that can take a hostname as an argument and return information about that host for Puppet to use.
You can use node entries in your manifests together with external nodes. You cannot, however, use external nodes and LDAP nodes together; you must use one of the two. Classes can be in hierarchies, however, so inheritance is available.
In both versions, after outputting the information about the node, you should exit with code 0 to indicate success. If you want a node not to be recognized, and to be treated as though it was not included in the configuration, your script should exit with a non-zero exit code. External node scripts for version 0.
The classes value is an array of classes to include for the node, and the parameters value is a hash of variables to define. This file can be queried for fact values. This example will produce results basically equivalent to this node entry: In both versions, the script should exit with code 0 after producing the desired output. Exit with a non-zero exit code if you want the node to be treated as though it was not found in the configuration.
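A minimal external node script sketch in Ruby; it emits the YAML classes/parameters structure described above. The class names, the parameter, and the hostname convention are all illustrative:

```ruby
#!/usr/bin/env ruby
# Minimal external node classifier sketch: given a hostname, emit YAML
# with a 'classes' array and a 'parameters' hash, as described above.
require 'yaml'

hostname = ARGV[0] || 'unknown'

node = {
  'classes'    => ['common'],
  'parameters' => { 'puppetserver' => 'puppet.example.com' },
}

# A hypothetical naming convention: web* hosts also get the apache class
node['classes'] << 'apache' if hostname.start_with?('web')

puts node.to_yaml
```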
External node scripts for versions before 0.

Configuring puppetmasterd

Are you using the default webserver? Switching to a more efficient web server implementation, such as Passenger or Mongrel, will allow you to serve many more nodes concurrently from the same server.
This performance tweak will offer the most immediate benefits. If your system can work with Passenger, that is currently the recommended route. On older systems, use Mongrel. Managed nodes can be configured not to check in automatically every 30 minutes, but rather to check in only when requested.

No central host

Using a central server offers numerous advantages, particularly in the areas of security and enhanced control. In environments that do not need these features, it is possible to rsync or otherwise transfer Puppet manifests and data to each individual node, and then run Puppet locally, for instance from cron.
This approach scales essentially infinitely, and full usage of Puppet and facter is still possible. For small directories, however, there is no problem in using it.
Using Passenger

This will result in performance improvements on both the client and server. This guide shows how to set it up.

Supported Versions

Passenger support is present in release 0. For earlier versions, consider using Mongrel.
This may work well for you, but a few people feel that using a proven web server like Apache would be superior for this purpose.

What is Passenger?

While Puppet should be compatible with every Rack application server, it has only been tested with Passenger. Depending on your operating system, the versions of Puppet, Apache, and Passenger may not support this implementation. Specifically, Ubuntu Hardy ships with an older version of Puppet (0. ). There are also some Passenger packages there, but as of this writing they do not seem to have the latest Passenger 2.
Passenger versions 2. So use either 2. Note that while it was expected that Passenger 2. So, Passenger 2.

Installation Instructions for Puppet 0.

Whatever you do, make sure your config. Passenger will setuid to that user. Or, you could just add the correct versions to your gem command. Therefore, config.

Apache Configuration for Puppet 0.
The config. Currently this is not configurable. Shortening this option allows puppetmasterd to get refreshed at some interval. This option is also somewhat dependent upon the number of puppetd nodes connecting and at what interval. This will allow idle puppetmasterd processes to get recycled.
The net effect is that less memory will be used, not more. The Mongrel documentation is currently maintained on our Wiki until it can be migrated over. Please see the OS-specific setup documents on the Wiki for further information.

Resource Type Reference

The namevar is the parameter that gets assigned when a string is provided before the colon in a type declaration.
In general, only developers will need to worry about which parameter is the namevar.

Parameters

Parameters determine the specific configuration of the instance. They either directly modify the system (these are called properties) or they affect how the instance behaves, e.g. whether to recurse.
Providers

Providers provide low-level functionality for a given resource type, usually in the form of calling out to external commands. When required binaries are specified for providers, fully qualified paths indicate that the binary must exist at that specific path, and unqualified binaries indicate that Puppet will search for the binary using the shell path.

Features

Features are abilities that some providers might not support. You can use the list of supported features to determine how a given provider can be used.
Resource types define features they can use, and providers can be tested to see which features they provide.
Custom Facts

A solution can be achieved by adding a new fact to Facter. These additional facts can then be distributed to Puppet clients and are available for use in manifests.
The Concept

You can add new facts by writing a snippet of Ruby code on the Puppet master.
We then use Plugins In Modules to distribute our facts to the client. To do this, we create a fact, then use the instructions on the Plugins In Modules page to copy our new fact into a module and distribute it.
During your next Puppet run the value of our new fact will be available to use in your manifests. The best place to get ideas about how to write your own custom facts is to look at the existing Facter fact code.
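A minimal custom fact sketch; the fact name hardware_bits is an assumption, and the value-computing logic is kept in a plain lambda so it can be exercised even without Facter installed:

```ruby
# hardware_bits.rb - a custom fact sketch. Drop a file like this into a
# module so pluginsync distributes it to clients. The fact name
# 'hardware_bits' is illustrative, not a standard fact.

compute_bits = lambda do
  # Integer#size is the byte width of the machine representation:
  # 8 bytes on a 64-bit Ruby build, so this yields "64" there
  (1.size * 8).to_s
end

# Only register with Facter when it is actually loaded
if defined?(Facter)
  Facter.add('hardware_bits') do
    setcode(&compute_bits)
  end
end

puts compute_bits.call
```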
You will find lots of examples of how to interpret different types of system data and return useful facts. You may not be able to view your custom fact when running facter on the client node. The former will return nil for unknown facts, the latter will raise an exception.
To test your custom Puppet facts, which are usually only loaded by puppetd, there is a small hack that makes Facter load them directly. You can then run facter, and it will import your code; hence, you should now see the new fact being retrieved when running puppetd. It is important to note that to use the facts on your clients you will still need to distribute them using the Plugins In Modules method.
On older versions of Puppet, prior to 0. , Puppet would look for custom facts on puppet: and this would enable the syncing of these files to the local file system and loading them within puppetd. Some additional options were available to configure this legacy method. The following command-line or config-file options are available (default options shown):

- factpath: where Puppet should look for facts. Multiple directories should be colon-separated, like normal PATH variables. By default, this is set to the same value as factdest, but you can have multiple fact locations.
- factdest: where Puppet should store facts that it pulls down from the central server.
- factsource: from where to retrieve facts. The standard Puppet file type is used for retrieval, so anything that is a valid file source can be used here.
- factsync: whether facts should be synced with the central server.
- factsignore: what files to ignore when pulling down facts.

Remember, the factsync approach described above is now deprecated, replaced by the plugin approach described on the Plugins In Modules page.
Custom Types and Providers

While Puppet does not require Ruby experience to use, extending Puppet with new types and providers does require some knowledge of the Ruby programming language, as is the case with new functions and facts.
The resource types provide the model for what you can do; they define what parameters are present, handle input validation, and determine what features a provider can or should provide. The providers implement support for that type by translating calls in the resource type into operations on the system. The libdir is special because you can use the pluginsync system to copy all of your plugins from the fileserver to all of your clients and separate Puppetmasters, if they exist.
The first thing you have to figure out is what properties the resource has. After adding properties, you then need to add any other necessary parameters, which can affect how the resource behaves but do not directly manage the resource itself. Parameters handle things like whether to recurse when managing files or where to look for service init scripts.
You may remember that things like require are metaparameters. Types are created by calling the newtype method on Puppet::Type, with the name of the type as the only required argument. You can optionally specify a parent class; otherwise, Puppet::Type is used as the parent class. You must also provide a block of code used to define the type. Blocks are a very powerful feature of Ruby that is not surfaced in most programming languages. A normal type will define multiple properties and possibly some parameters.
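A minimal newtype sketch; the greeting type and its message property are illustrative, not part of Puppet. The registration is guarded and the validation logic lives in a plain lambda, so the file is harmless and testable without Puppet loaded:

```ruby
# A sketch of defining a custom type. The 'greeting' type and its
# 'message' property are illustrative examples, not real Puppet types.
# Validation is kept in a plain lambda so it can run without Puppet.
valid_message = lambda do |value|
  unless value.is_a?(String) && !value.empty?
    raise ArgumentError, 'message must be a non-empty string'
  end
  value
end

if defined?(Puppet)
  Puppet::Type.newtype(:greeting) do
    @doc = 'Manages an illustrative greeting resource.'

    ensurable  # adds the standard ensure property

    newparam(:name, :namevar => true) do
      desc 'The name of the greeting.'
    end

    newproperty(:message) do
      desc 'The message text.'
      validate { |v| valid_message.call(v) }
    end
  end
end

puts valid_message.call('hello world')
```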
We have already mentioned that Puppet provides a libdir setting where you can copy files outside the Ruby search path; see also Plugins In Modules. All types should also provide inline documentation in the doc class instance variable. The text format is reStructuredText.
If you define a property named owner, then when you are retrieving the state of your resource, the owner property will call the owner method on the provider. You can set the ensure property up on your resource type just by calling the ensurable method in your type definition. The exists? method, somewhat obviously, is a boolean to determine whether the resource currently exists.
You can modify how ensure behaves, such as by adding other valid values and determining what methods get called as a result; see existing types like package for examples. The rest of the properties are defined much like the types themselves, with the newproperty method, which should be called on the type. When Puppet was first developed, there would normally be a lot of code in a property definition.
Now, however, you normally only define valid values or set up validation and munging. If you specify valid values, then Puppet will only accept those values, and it will automatically handle accepting either strings or symbols. In most cases, you only define allowed values for ensure, but it works for other properties too. For most properties, though, it is sufficient to set up validation. Puppet keeps track of the definition order, and it always checks and fixes properties in the order they are defined.
If, instead, the property should only be in sync if all values match the current value, see current types for examples.

Handling Property Values

Handling values set on properties is currently somewhat confusing, and will hopefully be fixed in the future.
When a resource is created with a list of desired values, those values are stored in each property in its should instance variable. Like ensure, one parameter you will always want to define is the one used for naming the resource; this is nearly always called name. If your parameter has a fixed list of valid values, you can declare them all at once.

Validation and Munging

If your parameter does not have a defined list of values, or you need to convert the values in some way, you can use the validate and munge hooks. The default munge method converts any values that are specifically allowed into symbols.
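The default munge behavior described above, converting allowed values into symbols, can be sketched in isolation; the allowed values here are illustrative:

```ruby
# A sketch of the default munge behavior described above: values on an
# allowed list are converted into symbols; anything else is rejected.
# The allowed values are illustrative.
ALLOWED_VALUES = [:installed, :absent].freeze

munge_value = lambda do |value|
  sym = value.to_s.to_sym
  unless ALLOWED_VALUES.include?(sym)
    raise ArgumentError, "#{value.inspect} is not an allowed value"
  end
  sym
end

puts munge_value.call('installed').inspect
```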
Validation and munging have no role to play during use of a given value, only during assignment.

Autorequire

You use the autorequire hook, which takes a resource type as an argument; your code should return a list of resource names that your resource could be related to.

Because the properties call getter and setter methods on the providers, except in the case of ensure, the providers must define getters and setters for each property.
Provider Features

A recent development in Puppet, around 0. , is provider features. Additionally, individual properties and parameters in the type can declare that they require one or more specific features, and Puppet will throw an error if those parameters are used with providers missing those features. The only option currently supported is specifying one or more methods that must be defined on the provider.
If no methods are specified, then the provider needs to specifically declare that it has that feature. When you define features on your type, Puppet automatically defines a bunch of class methods on the provider:

- feature?: passed a feature name, will return true if the feature is available or false otherwise.
- features: returns a list of all supported features on the provider.
- satisfies?: passed a list of features, will return true if they are all available, false otherwise.

Additionally, each feature gets a separate boolean method, so the above example would result in a paint? method.
See Custom Types and Provider Development for more information on the individual classes. You can see how this would be extensible to handle one of your own ideas. In addition to the docs and the provider name, we provide the three methods that the ensure property requires. For more about blocks, see the Ruby language documentation. This should always be true of how providers are implemented. Also notice that the ensure property, when created by the ensurable method, behaves differently, because it uses methods for creation and destruction of the file, whereas normal properties use getter and setter methods.
When a resource is being created, Puppet expects the create method (or, actually, any changes done within ensure) to make any other necessary changes. You can see how the absent and present values are defined by looking in the property.

Providers

For instance, there are more than 20 package providers, including providers for package formats like dpkg and rpm along with high-level package managers like apt and yum. Not all resource types have or need providers, but any resource type concerned about portability will likely need them.
We will use the apt and dpkg package providers as examples throughout this document; the examples used are current as of 0. When declaring a provider, you can specify a parent class; for instance, all package providers have a common parent class. Providers can also specify another provider from the same resource type as their parent. Puppet defaults to creating a new source for each provider type, so you have to specify when a provider subclass shares a source with its parent class.
Puppet providers include some helpful class-level methods you can use to both document and declare how to determine whether a given provider is suitable. The primary method is commands, which actually does two things for you: It declares that this provider requires the named binary, and it sets up class and instance methods with the name provided that call the specified binary. The binary can be fully qualified, in which case that specific path is required, or it can be unqualified, in which case Puppet will find the binary in the shell path and use that.
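The unqualified-binary lookup that the commands helper performs can be sketched in plain Ruby; this is a simplified assumption of the real behavior, not Puppet's actual implementation:

```ruby
# Sketch of resolving an unqualified command name via the shell PATH,
# similar in spirit to what the 'commands' helper does (simplified).
def find_command(name)
  ENV['PATH'].split(File::PATH_SEPARATOR)
             .map { |dir| File.join(dir, name) }
             .find { |path| File.executable?(path) }
end

# A fully qualified name would instead be checked directly, e.g.:
#   File.executable?('/usr/bin/dpkg')

puts find_command('sh').inspect
```

If find_command returns nil, the provider would be considered unsuitable, mirroring the behavior described below.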
If the binary cannot be found, then the provider is considered unsuitable. For example, here is the header for the dpkg provider as of 0. For file existence, truth, or falsehood, just call the confine class method with exists, true, or false as the name of the test and your test as the value. To test Facter values, just use the name of the fact.

Default Providers

Puppet does what it can to choose an appropriate default provider for each resource type.
This is generally done by a single provider declaring that it is the default for a given set of facts, using the defaultfor class method.
At this point, however, there is a default interface between the resource type and the provider. This interface consists entirely of getter and setter methods. When the resource is retrieving its current state, it iterates across all of its properties and calls the getter method on the provider for that property.
For instance, when a user resource is having its state retrieved and its uid and shell properties are being managed, the resource will call uid and shell on the provider to figure out the current state of each of those properties. This method call is in the retrieve method in Puppet::. When a resource is being modified, it calls the equivalent setter method for each property on the provider. The transaction is responsible for storing these returned values and deciding which value to actually send, and it does its work through a PropertyChange instance.
It calls sync on each of the properties, which in turn just call the setter by default. You can override that interface as necessary for your resource type, but in the hopefully-near future this API will become more solidified. Note that all providers must define an instances class method that returns a list of provider instances, one for each existing instance of that provider.
For instance, the dpkg provider should return a provider instance for every package in the dpkg database.
For simple cases, this is sufficient: you just implement the code that does the work for that property. However, because things are rarely so simple, Puppet attempts to help in a few ways.

Prefetching

First, Puppet transactions will prefetch provider information by calling prefetch on each used provider type.
The prefetch method then tries to find any matching resources, and assigns the retrieved providers to the found resources. Note that this also means that providers are often getting replaced, so you cannot maintain state in a provider.

Flushing

Many providers model files or parts of files, so it makes sense to save up all of the writes and do them in one run.
Providers in need of this functionality can define a flush instance method to do this.
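The flush pattern can be sketched in plain Ruby with no Puppet dependency: setters buffer pending values, and flush writes them all at once, as a file-backed provider might. The class and file format here are illustrative:

```ruby
require 'tempfile'

# Sketch of the flush pattern: setters record pending values, and
# flush writes them to the backing file in a single pass.
class SettingsBuffer
  def initialize(path)
    @path = path
    @pending = {}
  end

  def set(key, value)
    @pending[key] = value  # buffer the write instead of touching the file
  end

  def flush
    File.open(@path, 'w') do |f|
      @pending.sort.each { |key, value| f.puts "#{key}=#{value}" }
    end
    @pending.clear
  end
end

file = Tempfile.new('settings')
buf = SettingsBuffer.new(file.path)
buf.set('shell', '/bin/bash')
buf.set('home', '/home/luke')
buf.flush  # both writes hit the file in one run
puts File.read(file.path)
```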