Even though IT operations now take advantage of highly adaptable platforms for their IT infrastructures, they struggle more than ever with environment drift. It’s easy for development teams to spin up new virtual servers and systems, but this just means that there is more “stuff” to maintain. Configurations of various elements of the infrastructure will gradually wind up with hand-tweaked variants that can lead to downtime when equipment is swapped out or capacity is expanded by loading new server instances.

The promise of infrastructure automation, also called IT automation or infrastructure as code, is that it lets operations teams define what the system should really look like and then lets automated processes bring (and keep) all the systems into that exact architecture and configuration. A part of this process is automated server provisioning, a concept and practice that has been with us for some time. Likewise, we include network automation, where the components of network architectures can be remotely interrogated and then configured.

Infrastructure automation is the umbrella term for all of this: every aspect of keeping an enterprise IT platform in its intended state.

Two faces of automation

The industry has settled (somewhat) on the idea that there are two categories that need to be dealt with in creating a full IT platform. First, the actual hardware that pushes the packets around (switches, firewalls, data storage systems, and “raw” servers) needs to be configured; this is what is generally called infrastructure automation.

Second, servers need to be loaded with whatever software they’ll be running for the businesses that own them. In the cloud world, this tends to be called provisioning.

The terminology may not be fully resolved, but this doesn’t much matter because the two processes, automation and provisioning, are intertwined. Furthermore, the same toolset may (and probably should) be used to accomplish both provisioning and infrastructure automation.

How infrastructure automation works

In an organization that has committed itself to automation, nothing new happens to the infrastructure or production software unless either the steps for implementing the change are described in a technical language (this is called an imperative approach) or a similar technical markup is created that describes exactly what the new desired state should look like (this is a declarative approach).

Without belaboring the point, the declarative approach is the better one, for a simple reason. To use a non-networking example: if you have a set of imperative instructions that say the heat should be turned up ten degrees, then running that instruction set twice will raise the temperature twenty degrees. This is not usually the result that’s actually desired.

If, on the other hand, you take a declarative approach and say that the temperature should be 80 degrees Fahrenheit, then the system can evaluate what the current temperature is and react accordingly. If the current temperature is 75 degrees, what you actually want is not a ten-degree increase, but a five-degree warmup. To take it even a step further, if you wanted the temperature to maintain a constant 80 degrees, then you could use an event-based automation approach to detect temperature deviations and trigger automated actions that put the system back to 80 degrees.
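To make the contrast concrete, here is a minimal sketch of the thermostat example in plain Python, not tied to any particular automation tool: the imperative function encodes steps and compounds the change on every run, while the declarative one encodes only the desired end state and is safe to run any number of times.

```python
# Imperative: encodes the *steps*, so every run changes the system again.
def raise_heat(thermostat, delta=10):
    thermostat["setpoint"] += delta          # run twice -> twenty degrees hotter


# Declarative: encodes only the *desired end state*, so repeat runs are harmless.
DESIRED_TEMP_F = 80

def apply_desired_temp(thermostat, desired=DESIRED_TEMP_F):
    if thermostat["setpoint"] != desired:    # act only if there is drift
        thermostat["setpoint"] = desired


thermostat = {"setpoint": 75}
apply_desired_temp(thermostat)   # 75 -> 80: the five-degree warmup described above
apply_desired_temp(thermostat)   # still 80; the second run changes nothing
```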

Factors driving IT automation

This idea that multiple runs of a script or configuration process should produce the same result is called idempotence. The key idea is that you wind up with the same system state whenever the process is invoked.
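A small illustration of idempotence, again sketched in generic Python with a hypothetical configuration file: the function describes an end state (“this line exists in the file”), so running it once or a hundred times leaves the file identical.

```python
def ensure_line(path, line):
    """Idempotent: afterwards the file contains the line, no matter how often this runs."""
    try:
        with open(path) as f:
            existing = f.read().splitlines()
    except FileNotFoundError:
        existing = []
    if line not in existing:                  # only touch the file if the state is wrong
        with open(path, "a") as f:
            f.write(line + "\n")


ensure_line("/tmp/demo.conf", "max_connections = 100")
ensure_line("/tmp/demo.conf", "max_connections = 100")   # second call is a no-op
```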

You may be thinking that the declarative approach—especially an event-based one—sounds more complicated, but that’s the point of doing it in an automated way using a tool. You describe what your (IT infrastructure) paradise looks like, then let the tool periodically check, create, and maintain that paradisiacal state as needed.
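As a rough sketch of what “periodically check, create, and maintain” means in practice, the loop below compares observed state against desired state and acts only when drift is detected. It uses placeholder observe and remediate functions rather than any real tool’s API.

```python
import time

DESIRED = {"nginx": "running", "ntp": "running"}   # the declared "paradise"

def observe():
    # Placeholder: a real tool would query the managed systems here.
    return {"nginx": "stopped", "ntp": "running"}

def remediate(name, wanted):
    # Placeholder: a real tool would restart the service, reinstall the package, etc.
    print(f"drift detected: setting {name} -> {wanted}")

def reconcile(interval=60):
    while True:
        actual = observe()
        for name, wanted in DESIRED.items():
            if actual.get(name) != wanted:
                remediate(name, wanted)
        time.sleep(interval)
```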

Infrastructure automation tools

As you’d expect, SaltStack is highly partial to both the enterprise SaltStack products and the open-source Salt tool on which the commercial products are based. We’ll get to why that is in a moment, but first we should mention other tools in the interest of completeness.

  • Terraform is designed to let you declaratively describe the desired infrastructure and then work agnostically across whichever cloud environments you’re using. Created by HashiCorp, it focuses almost exclusively on cloud deployments.
  • Ansible is an open-source tool primarily for Unix (and related) systems, though it can also manage other systems such as Windows. It uses a declarative language to create “playbooks,” which describe desired states. Its approach is agentless, meaning that no software runs on the systems under management. Being agentless can be advantageous in some cases, but the downside is that certain system conditions can only be monitored from the managed system itself, so maintaining desired states is sometimes harder than it needs to be. While Ansible remains open source, the company behind it was acquired by Red Hat in 2015.
  • Chef comes more from the world of developers than of operations teams, but it supports the infrastructure as code concept. It uses a Ruby-based language to create “recipes” that describe deployment and configuration procedures. While the recipe concept supports certain elements of the declarative approach, the general approach in Chef is imperative.
  • Puppet, like Chef, is based on Ruby, with configuration code tending to be more geared toward system administrators, while Chef is developer-centric. Puppet uses agents on the servers it manages.

What about SaltStack?

The SaltStack approach is declarative and event-based. SaltStack can carry out management tasks via agents, agentless connections, or proxy (API-based) control. The agentless and proxy options make it possible to handle resource-constrained devices such as network gear or devices used in the Internet of Things.

Machines under SaltStack control, whether via agent or agentless methods, are called minions. The hubs that store the descriptions of what the infrastructure should look like (these are called state files) and that send commands and updates to the minions are called masters.
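For instance, a master can tell a group of minions to apply a state using Salt’s Python client API. This is only a sketch: it assumes it runs on a master with sufficient privileges, and “webserver” is a hypothetical state file name, not something shipped with Salt.

```python
# Run on a Salt master; assumes a hypothetical state file named webserver.sls
# exists in the master's file roots.
import salt.client

local = salt.client.LocalClient()

# Ask every minion whose ID starts with "web" to enforce the webserver state.
results = local.cmd("web*", "state.apply", ["webserver"])

for minion_id, state_results in results.items():
    print(minion_id, "returned", len(state_results), "state results")
```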

One unique aspect of SaltStack is that it uses an event bus based on the ZeroMQ messaging library to enable fast, persistent, event-driven communications between masters and minions. Originally developed for high-speed banking transactions, ZeroMQ performs services similar to other message queues, but it is a library used directly by the Salt code rather than a service that needs to be separately installed and configured. This ZeroMQ messaging backbone gives SaltStack an important advantage: it is fast enough to handle very large-scale environments with rapid updates, and it can automatically detect and respond to changes in critical systems. There are deployments in the wild with 35,000 minions running from a single master (though SaltStack does enable both multiple and redundant masters). LinkedIn, an early adopter of Salt, reports that its SRE team uses SaltStack event-driven automation to auto-remediate about 2,500 IT service tickets each day.
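The publish/subscribe pattern underneath that event bus can be illustrated in a few lines of pyzmq. This is purely conceptual: it assumes the pyzmq library and uses Salt’s default publish port 4505, but it does not reproduce Salt’s actual wire protocol or event formats.

```python
# Conceptual pub/sub illustration with pyzmq; not Salt's real protocol.
import time
import zmq

ctx = zmq.Context()

# "Master" side: publish events to anyone listening (Salt publishes on 4505 by default).
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:4505")

# "Minion" side: subscribe to everything the master publishes.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:4505")
sub.setsockopt(zmq.SUBSCRIBE, b"")

time.sleep(0.2)   # give the subscription a moment to register (slow-joiner effect)

pub.send_multipart([b"salt/minion/web01/status", b'{"load": 0.93}'])
topic, payload = sub.recv_multipart()
print(topic.decode(), payload.decode())
```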

More recently, continuous deployment (CD), and the security concerns it raises, has given rise to the notion of “security as code.” SaltStack has built a commercial product suite that takes full advantage of the Salt automation architecture to create a discipline of SecOps within an organization.