Deploying Production Ready Rundeck On EKS With One Command

tagore · Feb 1, 2021

Rundeck is an open-source automation tool, acquired by PagerDuty, which can be used to provide self-service capabilities to teams across your organization. Since you are reading this post, I assume you already know enough about the tool and are investigating an ideal way to deploy the open-source version of it in your enterprise. If not, follow the link below to learn more about Rundeck.

The intent of this post is to present a walkthrough of all the necessary steps to deploy a Rundeck instance in a production-ready configuration.

What do I mean by production ready?

Below are the requirements I considered to qualify a Rundeck deployment as production ready. I have also included how each requirement is met in the statement after the pipe.

  • Resilient infrastructure | Deployed on Kubernetes
  • Persistent Rundeck configuration | Project and job configurations in RDS
  • Execution log backups | Execution logs backed up to an S3 bucket
  • AD integration | Integration with existing AD via LDAP
  • RBAC using custom ACLs | ACLs mounted to the pod using a config map
  • Plugins | Bootstrapping necessary plugins using an initContainer

Sorry to clickbait you, but we both know it is not possible to roll out everything needed for Rundeck in the above configuration with just a single command. However, in this post I will walk you through all the steps to create and configure the needed dependencies in the pre-requisites section below, and towards the end of the post we will deploy a Rundeck instance on a Kubernetes (EKS) cluster using a community-based Helm chart I'm hosting here. If it makes it any better, the final step to deploy using the Helm client is a single command. :p

Pre-requisites:

Things that need to be configured before we actually get to the one-command deployment.

1. Customize Image:

To leverage Rundeck for automating many of our ops procedures, we need to equip the instance with the necessary tooling, such as the aws and vault CLIs. These can be any other tools depending on your use case.

In a typical setup where Rundeck is deployed on a virtual machine, this would be hand-configured by an ops team or done using a configuration management tool like Ansible or Chef. Since we are aiming to deploy on Kubernetes, we do this by baking the necessary utilities into the container image.
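
The Dockerfile itself was shared as an embedded gist in the original post; below is a minimal sketch of what such a Dockerfile might contain, assuming the aws and vault CLIs are the extra tooling (package sources and versions are illustrative):

ARG RUNDECK_IMAGE
FROM ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.4}

# switch to root to install the extra tooling
USER root

# install the AWS CLI v2 (illustrative; pin the version you need)
RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
    && curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip \
    && unzip -q /tmp/awscliv2.zip -d /tmp && /tmp/aws/install \
    && rm -rf /tmp/aws /tmp/awscliv2.zip

# install the Vault CLI (version is illustrative)
RUN curl -sSL "https://releases.hashicorp.com/vault/1.6.1/vault_1.6.1_linux_amd64.zip" -o /tmp/vault.zip \
    && unzip -q /tmp/vault.zip -d /usr/local/bin && rm /tmp/vault.zip

# drop back to the unprivileged user used by the base image
USER rundeck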

The example Dockerfile I created for this post was built and pushed to a remote registry as shown below.

$ docker build -t tagore22/rundeck:3.3.4 docker/
Sending build context to Docker daemon 2.56kB
Step 1/7 : ARG RUNDECK_IMAGE
Step 2/7 : FROM ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.4}
---> 3bc7314c7adf
.
.(omitted)
.
Successfully built 56c6556f29c8
Successfully tagged tagore22/rundeck:3.3.4
$ docker push tagore22/rundeck:3.3.4
The push refers to repository [docker.io/tagore22/rundeck]
f1b0c2e80682: Layer already exists
.
.(omitted)
.
a6ebef4a95c3: Layer already exists
b7f7d2967507: Layer already exists
3.3.4: digest: sha256:e328c9e911e2239552bf5b00562cbe4c03e58bba49081322d184dad6ce21a79f size: 3675

2. Ingress Controller

The Kubernetes Ingress mechanism is the de facto method to expose web applications running inside a Kubernetes cluster. We will configure an Ingress for the Rundeck pod using Helm values later in this article, but before we do that, a cluster-wide ingress controller needs to be deployed.

There are several ingress controllers to choose from; NGINX's ingress-nginx is the most widely used. Instructions to deploy it on EKS can be found here.

  • Route53
    Decide on the hostname you want to use to access the Rundeck service after it is deployed, and create a CNAME record pointing to the load balancer endpoint created by the ingress controller deployment. This hostname will later be used to configure the Rundeck pod at initialization time; it is passed to the Helm deployment through custom-values.yaml. The load balancer endpoint can be looked up as shown below.
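
For example, if ingress-nginx was installed into the ingress-nginx namespace, the load balancer endpoint for the CNAME can be looked up roughly like this (service name and namespace depend on how the controller was installed):

$ kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'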

3. Database

Rundeck can be configured to store all project and job configuration on the local file system, in an H2 database, or in a remote MySQL or Postgres database. For any production use it is highly recommended to use an external Postgres or MySQL database.

For this post I have created a Postgres database using AWS RDS and configured a user for Rundeck with read/write permissions on the database. These credentials will be supplied later as environment variables mounted to the pod using custom-values.yaml.
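
The official Rundeck image reads its database settings from environment variables, so the non-secret parts can be set through custom-values.yaml while the password comes from the Kubernetes secret created in the Secrets step. A rough sketch, assuming a Postgres RDS instance (the endpoint and database name are placeholders):

RUNDECK_DATABASE_DRIVER=org.postgresql.Driver
RUNDECK_DATABASE_URL=jdbc:postgresql://<rds-endpoint>:5432/rundeck
RUNDECK_DATABASE_USERNAME=rundeck
# RUNDECK_DATABASE_PASSWORD is supplied via the rundeck-env-secret created later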

4. Active Directory

Create a bind user in Active Directory that has access to all other users in the organization.

Note the bind user's credentials; they will later be mounted directly into the Rundeck pod as environment variables using custom-values.yaml.
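
The Rundeck image similarly exposes JAAS/LDAP settings as environment variables. Below is a sketch of the non-secret values that could be set through custom-values.yaml; the hostname and DNs are placeholders, and the exact set of variables depends on your directory layout and Rundeck version:

RUNDECK_JAAS_MODULES_0=JettyCombinedLdapLoginModule
RUNDECK_JAAS_LDAP_PROVIDERURL=ldaps://ad.example.com:636
RUNDECK_JAAS_LDAP_BINDDN=CN=rundeck-bind,OU=ServiceAccounts,DC=example,DC=com
RUNDECK_JAAS_LDAP_USERBASEDN=OU=Users,DC=example,DC=com
RUNDECK_JAAS_LDAP_ROLEBASEDN=OU=Groups,DC=example,DC=com
# RUNDECK_JAAS_LDAP_BINDPASSWORD is supplied via the rundeck-env-secret created later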

5. AWS IAM User

Create an IAM user and access credentials for that user in your AWS environment using the console or CLI. These credentials will be used to configure the S3 log storage plugin.

This is an optional step and can be skipped if automatic log backups are not needed in your environment. An example of the CLI route is shown below.
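
If you go the CLI route, creating the user and its access keys could look roughly like this (the user name is illustrative):

$ aws iam create-user --user-name rundeck-log-writer
$ aws iam create-access-key --user-name rundeck-log-writer
# note the AccessKeyId and SecretAccessKey in the output; they are used in the Secrets step below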

6. AWS S3 bucket

Create an S3 bucket in your AWS account and give the user created above the access it needs to read and write to the bucket. In my case I created a bucket named 'eks-rundeck-logs', and this value is provided through the custom-values.yaml file. The commands below show one way to do this.
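
Creating the bucket and granting the IAM user access could be done roughly as follows (the bucket name matches my example, and the inline policy is a minimal sketch):

$ aws s3 mb s3://eks-rundeck-logs
$ aws iam put-user-policy --user-name rundeck-log-writer --policy-name rundeck-log-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::eks-rundeck-logs", "arn:aws:s3:::eks-rundeck-logs/*"]
    }]
  }'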

7. Namespace

Kubernetes namespaces provide logical network and resource isolation, and it is a best practice to deploy services in their own dedicated namespace. Run the command below to create a new dedicated namespace for Rundeck.

$ kubectl create namespace rundeck 

8. Secrets

We need to create Kubernetes secrets to supply the values that we cannot pass directly as plain text in custom-values.yaml.

  • Create a Kubernetes secret to provide the values that need to be mounted as environment variables into the Rundeck pod. These credentials were created in the Database and Active Directory pre-requisite steps.
$ kubectl create secret generic rundeck-env-secret -n rundeck \
--from-literal=RUNDECK_DATABASE_PASSWORD="MySecretDBPass" \
--from-literal=RUNDECK_JAAS_LDAP_BINDPASSWORD="MySecretBindPass"
  • Create a Kubernetes secret with an AWS credentials file, which will be mounted into the Rundeck pod at runtime.
$ export AWS_ACCESS_KEY_ID=<real-key-id-goes-here>
$ export AWS_SECRET_ACCESS_KEY=<real-secret-key-goes-here>
$ printf '[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n' \
  "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > /tmp/aws-creds
$ kubectl create secret generic aws-access-keys-secret -n rundeck \
--from-file=credentials=/tmp/aws-creds
  • Rundeck uses SSH to execute commands remotely on the configured nodes, so create an SSH key (or use one you already have) and create a new Kubernetes secret from it.
$ kubectl create secret generic ssh-keys-rundeck -n rundeck \
--from-file=rundeck=<path-to-the-ssh-private-key>
  • Create a secret with the certificates you want to use for TLS termination on the ingress controller.
$ kubectl create secret tls rundeck-tls-secret -n rundeck \
--cert="<path-to-cert-chain>" --key="<path-to-private-key>"

9. Create config map with custom ACL’s

On a typical Rundeck instance hosted on a VM, access control policies can be configured by creating ACL files in the $RUNDECK_HOME/etc directory. The same cannot be applied to a Rundeck server on Kubernetes because, by default, no data inside the container is persisted when the container gets rescheduled or restarted for whatever reason. This limitation can be worked around using init containers and shared volume mounts. Below is my hack to overcome it.

Note: the ACLs cannot be mounted into the pod directly, as Rundeck needs full permissions on the $RUNDECK_HOME/etc directory.

Step 1: Create a configmap with the ACL data before deploying Rundeck.

Step 2: Add an init container to the deployment using a custom values.yaml.

Step 3: The init container has two volume mounts: the first is the mounted configmap, and the second is a shared emptyDir that is also mounted into the Rundeck container at $RUNDECK_HOME/etc.

Step 4: As part of initialization, the init container copies the contents of the configmap from its first volume into the shared volume, which Rundeck reads during startup to pick up the configured ACL policies.
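
To make the mechanism concrete, here is a rough sketch of the pod-spec additions involved. The real wiring lives in the chart's templates and values.yaml, so treat the keys and names below as illustrative rather than the chart's actual schema:

# illustrative pod spec fragment, not the chart's actual values schema
initContainers:
  - name: copy-acls
    image: busybox:1.32
    # copy the ACL files from the configmap volume into the shared emptyDir
    command: ["sh", "-c", "cp /acl-config/* /rundeck-etc/"]
    volumeMounts:
      - name: acl-configmap        # the configmap created in the next step
        mountPath: /acl-config
      - name: rundeck-etc          # emptyDir shared with the rundeck container at $RUNDECK_HOME/etc
        mountPath: /rundeck-etc
volumes:
  - name: acl-configmap
    configMap:
      name: rundeck-acl-configmap  # must match the name of the configmap you create
  - name: rundeck-etc
    emptyDir: {}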

Below is the configmap with ACL policies for the admin and developers groups.
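
The full configmap is in the gist linked below; its shape is roughly as follows, trimmed here to a single admin policy to show the structure (the developers policy is analogous, and the names and rules shown are a sketch):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rundeck-acl-configmap
  namespace: rundeck
data:
  admin.aclpolicy: |
    description: Admin, all access.
    context:
      project: '.*'
    for:
      resource:
        - allow: '*'
      adhoc:
        - allow: '*'
      job:
        - allow: '*'
      node:
        - allow: '*'
    by:
      group: admin
    ---
    description: Admin, all access at the application level.
    context:
      application: 'rundeck'
    for:
      resource:
        - allow: '*'
      project:
        - allow: '*'
      storage:
        - allow: '*'
    by:
      group: admin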

The above configmap can be created using the command below.

$ kubectl apply -f https://gist.githubusercontent.com/dev-korr/35da4ba0da0c4d79861a115499d2b9a7/raw/5927ed11cb32cde54fb2319313bc9d5984631e6e/rundeck-acl-config-map.yaml

10. Helm3

We will be using a slightly improvised version of Rundeck's community Helm chart, which I host here, to deploy Rundeck to Kubernetes in the later steps. So, if you haven't already, it's time to configure your Helm client. Instructions to configure the Helm client can be found here.

Preparing custom helm values.yaml

Below is the reference template values.yaml that can be used. Update all the necessary values in the file based on what you observed after performing the above pre-requisite steps in your environment. I have also added comments in the template file to walk through the configuration.

Plugins

Plugins provided as command args to the init container will be downloaded whenever a new pod is created. You can find a few example plugins installed in the template-values.yaml below.
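
The actual template-values.yaml is embedded in the original post and lives in the chart repository; the trimmed sketch below only illustrates the kind of values involved. The key names are illustrative rather than the chart's exact schema, so check the chart's own values.yaml for the real keys:

# illustrative custom-values.yaml sketch -- verify key names against the chart
image:
  repository: tagore22/rundeck
  tag: "3.3.4"

ingress:
  enabled: true
  hosts:
    - rundeck-eks.example.com
  tls:
    - secretName: rundeck-tls-secret          # created in the Secrets step
      hosts:
        - rundeck-eks.example.com

env:
  RUNDECK_GRAILS_URL: "https://rundeck-eks.example.com"
  RUNDECK_DATABASE_DRIVER: "org.postgresql.Driver"
  RUNDECK_DATABASE_URL: "jdbc:postgresql://<rds-endpoint>:5432/rundeck"
  RUNDECK_DATABASE_USERNAME: "rundeck"
  # S3 log storage plugin configuration (variable names from the official image docs)
  RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_NAME: "org.rundeck.amazon-s3"
  RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_BUCKET: "eks-rundeck-logs"
  RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_REGION: "us-east-1"

# secrets created in the pre-requisites: env secret, aws credentials file, ssh key
envSecretName: rundeck-env-secret
awsCredentialsSecret: aws-access-keys-secret
sshKeySecret: ssh-keys-rundeck

# plugin jars downloaded by the init container on every pod start (URLs are examples)
plugins:
  - "<url-to-plugin-jar>"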

Deploying to kubernetes cluster

Finally! With all the above configuration in place, you can run the promised one-liner to deploy the Rundeck instance to the Kubernetes cluster.

# Clone the repo with rundeck helm chart.
$ git clone https://github.com/dev-korr/rundeck.git && cd rundeck
# Install rundeck using helm.
$ helm install rundeck charts/rundeck \
--namespace=rundeck \
--values=<your-custom-values>.yaml
NAME: rundeck
LAST DEPLOYED: Sun Jan 31 01:41:43 2021
NAMESPACE: rundeck
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
https://rundeck-eks.example.com

Provided all the secrets and configuration were created correctly, your Rundeck pod should be deployed and running on the Kubernetes cluster within a few minutes. You can verify that the Rundeck pod is running and healthy using the command below.

$ kubectl get pods -n rundeck
NAME                      READY   STATUS    RESTARTS   AGE
rundeck-d5d9f44cc-57z6g   2/2     Running   0          3m

Once the pod is in the Running state, the UI should be accessible at the URL presented after helm install, and anyone should be able to log in with appropriate credentials from the Active Directory you integrated your Rundeck instance with. On successful login you'll be presented with a landing page like the one below.

After creating a job and running it, navigate to System → Log Storage to verify that automatic S3 log backups are working as expected. Below is a screenshot of how it should look if configured correctly.

Conclusion

Thanks for reading. I hope this post was helpful in providing a holistic overview of deploying open-source Rundeck on Kubernetes. Feel free to reach out or leave a comment with your thoughts and questions.
