
Startup Scripts#

When you create an EC2 Instance, you have the opportunity to execute a script when it boots up. This script can do anything you like, allowing you to provision the system with the software and services you need (and beyond that). I call these scripts User Data scripts, because they're supplied via the "User data" field within EC2 when you're launching an Instance.

There are two types of scripts you can use when you provision an EC2 Instance with a User Data script:

  1. With a shell script
  2. With cloud-init "directives"

AWS also provides AWS OpsWorks, but as I've said previously, I recommend avoiding certain tools offered by Cloud providers as they act as a form of lock-in.

When you provide a script to an EC2 Instance to be executed on launch, that script runs as the root user and is able to do anything:

  • Install software
  • Create users
  • Mount disks (EBS Volumes, EFS, etc.)
  • Run any scripts it likes
  • Download assets from the Internet
  • Delete itself

Anything. We have to be careful when writing such scripts to ensure they do exactly what we need them to do and nothing else.
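
As a quick illustration of that power, here's a minimal sketch of a launch script running as root. The username, device name, and mount point are assumptions for the example; on a real Instance you'd confirm the device name with lsblk first:

#!/bin/bash

# Create a system user for an application to run as (the name is illustrative)
useradd --system --shell /usr/sbin/nologin appuser

# Format and mount an attached EBS Volume at /data
# WARNING: mkfs wipes the volume; only run this against a fresh, empty volume
mkfs -t ext4 /dev/xvdf
mkdir -p /data
mount /dev/xvdf /data
chown appuser:appuser /data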

We'll cover the theory and show some examples of these techniques below. We won't use any of these scripts right now, but we will use these skills when we create an EC2 Instance.

Shell Scripts vs cloud-init#

We've looked at Bash script writing previously. It's my preferred method of provisioning an EC2 Instance and, with the exception of one client, it's the only option I've ever seen deployed by businesses. We'll cover cloud-init directives as well, but shell scripts are a much better option.

With a shell script, you're learning actual skills that can be taken with you between systems:

  • You can use Bash scripting skills outside of EC2 Instance launch scripts
  • You'll use Bash scripting inside of CI/CD pipelines - I guarantee it
  • Bash is just a syntax, and the art of scripting can be translated to other languages like PowerShell and Python

Ultimately, it's the superior way of handling most automation tasks where an actual programming language (Python, Go, ...) isn't needed.

The skills cloud-init teaches you are useful, and they do move between Cloud providers, but I don't consider the time and energy investment worth it. Not when you can get better and better at Bash, a skill that transfers in so many more ways than cloud-init does.

I'll provide examples of both, but I recommend you stick to the Bash scripting option instead.

How it works#

Let's look at examples of both the Bash scripting method and the cloud-init method.

Bash Scripting#

When considering the Bash scripting option, you essentially write a script that calls commands specific to the Linux distribution that you're using. Let's pretend that's Ubuntu 20.04. Then when you create the EC2 Instance, you use the "User data" field (hence, User Data script) to supply your script. That's it.

Let's assume this is our script:

#!/bin/bash

apt update
apt upgrade -y
apt install nginx -y
systemctl enable nginx
systemctl start nginx

You should be able to determine what this does with ease, but let's break it down anyway:

  1. We have the shebang at the beginning of the script: #!/bin/bash
  2. We use apt to update the list of known packages and download repository metadata
  3. We use apt to upgrade the entire system, upgrading all packages, including the kernel
  4. We then use apt to install the nginx package
  5. Finally, we use systemctl to enable and then start the nginx service that we get from the nginx package

Simple enough. To use this script, we would provide it to the "User data" field in the EC2 console.
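
The console isn't the only way to supply it, either. If you save the script to a file, the AWS CLI can pass it along when launching the Instance. Here's a sketch; the AMI ID, instance type, key name, and file name are all placeholders:

# Launch an Instance, passing our script as User Data via the AWS CLI
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key \
  --user-data file://install-nginx.sh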

cloud-init#

Let's repeat what we did above, but this time with cloud-init directives:

#cloud-config
package_update: true
package_upgrade: true

packages:
  - nginx

runcmd:
  - systemctl enable nginx
  - systemctl start nginx

So instead of writing a Bash script, we write YAML to "describe" to cloud-init what it is we want to do via "directives". Everything about this YAML file is doing the same thing as our Bash script, but it's more convoluted, and I believe it raises more questions than it answers.

The #cloud-config line is like the shebang at the top of a Bash script: it tells cloud-init that it's reading a directives file, not a shell script.

This same file is supplied to the EC2 Instance in the same way: via the "User data" field.
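
Whichever method you use, it pays to know where the output goes. cloud-init handles both shell scripts and directives, and it logs what it did on the Instance itself. These are the standard log locations on Ubuntu, and the first place to look when a launch script misbehaves:

# On the Instance, inspect what your User Data script actually did
sudo tail /var/log/cloud-init-output.log   # stdout/stderr from your script
sudo less /var/log/cloud-init.log          # cloud-init's own, more verbose log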

Use cases#

When would you use the User Data scripting functionality to provision a system on boot? The answers are basically limitless, but here are a few use cases I've come across in my time.

I've seen them used for complete system provisioning. End to end. They're executed against a blank EC2 Instance with nothing but the base OS installed. The script provisions everything and anything that's required. This isn't ideal. It's extremely slow, which means the boot time of the new system is also slow. There are better ways of doing this.

I've actually seen setups where it's simply not used at all, or at most it was used to add additional SSH keys or users to a system. Instead, the EC2 Instances used AMIs that had as much as possible baked into them. This is the ideal solution. It allows for very fast boot times, but it comes with the added work of managing and "baking" (as it's known) new AMIs when you need to update the system software.
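
To illustrate that lighter-touch approach, a User Data script that does nothing but add an extra SSH key might look like this. The key itself is a placeholder, and "ubuntu" is the default user on Ubuntu AMIs:

#!/bin/bash

# Append an additional public key for the default user
echo "ssh-ed25519 AAAA...placeholder... extra-admin" >> /home/ubuntu/.ssh/authorized_keys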

There are middle grounds too, but it really depends on the needs of the business. Knowing the right answer comes with time and experience; it's obviously not possible to define and outline every possible use case and solution here.

How I use User Data scripts#

That all being said, here's how I use these user data scripts: I don't.

I like to provision my systems with a tool called Ansible. It's a tool we cover in Level Two. It's designed to do multiple things: firstly, to abstract away the host machine behind a consistent configuration language, and secondly, to provide all of your configuration as code. Both of these are powerful options to have, but we'll explore them more in Level Two.

Secondly, I use another tool called Packer to build an AMI for me. We'll cover that in Level Two as well.

Packer creates an EC2 Instance for me and uploads (via an SSH connection) a few files to it, namely a systemd .timer and a .service file. It also installs Ansible on the host, which is what the .service file calls for me (and the .timer triggers the .service every minute). Once the provisioning is complete, the Instance is turned into an AMI and then the live Instance is terminated. Job done.

When I create an EC2 Instance from that AMI, Ansible fires off every minute, looking for its code in a remote GitHub repository. It downloads that code and executes it.

This allows me to update existing, running systems by updating the Ansible code (because it'll be downloaded and executed a minute later), and newly created Instances will download that new code too.
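
To make that pattern more concrete, here's a rough sketch of what such a .timer and .service pair could look like, written as a Bash snippet so it's self-contained. The unit names, repository URL, and playbook name are all assumptions, and Ansible's ansible-pull command is one way of implementing the "download and execute" step:

#!/bin/bash
# Sketch only: unit names, repo URL, and playbook are illustrative

# A oneshot service that pulls the Ansible code and applies it
cat > /etc/systemd/system/ansible-pull.service <<'EOF'
[Unit]
Description=Pull and apply Ansible configuration

[Service]
Type=oneshot
ExecStart=/usr/bin/ansible-pull -U https://github.com/example/config-repo.git site.yml
EOF

# A timer that triggers the service a minute after boot, then every minute
cat > /etc/systemd/system/ansible-pull.timer <<'EOF'
[Unit]
Description=Run ansible-pull every minute

[Timer]
OnBootSec=1min
OnUnitActiveSec=1min

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now ansible-pull.timer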

Summary#

The User Data script concept is a powerful one, and it can be tempting to load the script with a tonne of commands and have it do everything, but there are better ways that we explore in Level Two.