With Kubernetes continuing to establish itself as the industry standard for container orchestration, finding effective ways to apply declarative models to your applications and tooling is key to success. In this article, we will create a K3s Kubernetes cluster on AWS and then implement secure GitOps using Argo CD and Vault. You can view the infrastructure and Kubernetes umbrella-application repositories at the following two links:
Here are the components we will use:
- AWS: the cloud provider we will use for the underlying infrastructure. It manages our virtual machines and the networking Kubernetes needs, and allows ingress into the cluster from the outside.
- K3s: a lightweight Kubernetes distribution from Rancher. It strips out many alpha features and cloud plugins, and it can use a relational database (in this case, RDS) instead of etcd for backend storage.
- Rancher: an API-driven UI that makes it easy to manage your Kubernetes clusters.
- Vault: HashiCorp's secrets-management implementation. I will use Banzai Cloud's bank-vaults implementation of Vault, which can inject secrets directly into pods via an admission webhook. This greatly reduces the need to store secrets in your Git repository.
- Argo CD: a GitOps tool that lets you maintain the state of your Kubernetes resources in Git. Argo CD automatically synchronizes your Kubernetes resources with the ones in your Git repository, and also ensures that manual changes to manifests are automatically reverted. This enforces your declarative deployment pattern.
- Cert Manager with LetsEncrypt: provides a way to automatically generate and renew certificates for Kubernetes ingresses.
Let's start with the AWS infrastructure.
You need to have the following CLIs installed on your system:
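The list of CLIs did not survive formatting in the original; based on the commands used later in the walkthrough, you will at minimum need terraform, aws, and kubectl (that list is an assumption). A quick sketch to check what is present:

```shell
# Check for the CLIs the rest of this walkthrough uses. The exact list
# was lost from the article; terraform, aws, and kubectl are assumed
# from the commands that follow.
checked=0
for tool in terraform aws kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before continuing"
  fi
  checked=$((checked + 1))
done
```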
You will also need AWS administrator access and an access key. If you don't have an account, you can create one with a credit card.
Finally, you need a managed domain whose records you can manage and update to point at the Kubernetes Elastic Load Balancer (ELB). If you don't have one, I suggest opening an account at NameCheap and buying a .dev domain. It's cheap, and it works well.
For our AWS infrastructure, we will use Terraform with an S3 backend to persist state. This gives us a way to declaratively define our infrastructure and make changes repeatedly as we need to. In the infrastructure repository, you'll see a k3s/example.tfvars file. We need to update this file for our specific environment and usage, setting the following values:
- db_username: the administrator username for the RDS instance used as the Kubernetes backend storage
- db_password: the password for the RDS administrator. This would normally be passed inline to your terraform apply command, but for simplicity we'll set it in the file.
- public_ssh_key: your public SSH key, which you'll use when you need to SSH into the Kubernetes EC2 instances.
- keypair_name: the key-pair name to apply to your public_ssh_key.
- key_s3_bucket_name: the name of the bucket that will be created to store your kubeconfig file.
If you want to change the cluster size or set specific CIDRs (Classless Inter-Domain Routing blocks), there are optional fields you can set, but by default you'll get a six-node (3 servers, 3 agents) K3s cluster.
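Put together, a filled-in example.tfvars might look like the sketch below. Every value is a placeholder; the field names come from the list above:

```shell
# Write a sample k3s/example.tfvars. All values here are placeholders;
# substitute your own before running terraform apply.
mkdir -p k3s
cat > k3s/example.tfvars <<'EOF'
db_username        = "k3sadmin"
db_password        = "change-me-please"
public_ssh_key     = "ssh-ed25519 AAAA...your-key... you@example.com"
keypair_name       = "k3s-keypair"
key_s3_bucket_name = "my-k3s-kubeconfig-bucket"
EOF
grep -c '=' k3s/example.tfvars
```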
You will also need to create an S3 bucket to store your Terraform state, and change the bucket field in the k3s/backends/s3.tfvars and k3s/main.tf files to match it.
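As a sketch, the backend config might look like the following. The bucket name and region are placeholders, and the exact fields in s3.tfvars depend on the repository, so check the file you're editing:

```shell
# Sample k3s/backends/s3.tfvars pointing Terraform state at your bucket.
# The bucket name and region are placeholders; the real file in the repo
# may contain additional fields.
mkdir -p k3s/backends
cat > k3s/backends/s3.tfvars <<'EOF'
bucket = "my-k3s-terraform-state"
key    = "k3s/terraform.tfstate"
region = "us-east-1"
EOF
cat k3s/backends/s3.tfvars
```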
Once we've updated all the fields and created the S3 state bucket, we can apply the Terraform. First, make sure you have an administrator IAM user in your AWS account and that you've set the environment variables or AWS credentials file so you can talk to the AWS API, then run the following commands:
cd k3s/
terraform init -backend-config=backends/s3.tfvars
terraform apply -var-file=example.tfvars
Once you run the commands above, Terraform will output the AWS state it expects to create after a successful apply. If everything looks right, enter yes. Provisioning the AWS resources then takes 5-10 minutes because of the RDS cluster.
Verify your Kubernetes cluster
After Terraform applies successfully (wait a few more minutes to make sure K3s is deployed), you need to grab the kubeconfig file from the S3 bucket with the following command (replace the bucket name with the one you set in example.tfvars):
aws s3 cp s3://YOUR_BUCKET_NAME/k3s.yaml ~/.kube/config
This should succeed, and you can now talk to your cluster. Let's check the status of our nodes and make sure they're all Ready before proceeding.
$ kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-1-208.ec2.internal   Ready    <none>   39m   v1.18.9+k3s1
ip-10-0-1-12.ec2.internal    Ready    master   39m   v1.18.9+k3s1
ip-10-0-1-191.ec2.internal   Ready    master   39m   v1.18.9+k3s1
ip-10-0-2-12.ec2.internal    Ready    master   39m   v1.18.9+k3s1
ip-10-0-2-204.ec2.internal   Ready    <none>   39m   v1.18.9+k3s1
ip-10-0-1-169.ec2.internal   Ready    <none>   39m   v1.18.9+k3s1
Let's look at the state of Argo CD, which was deployed automatically via its manifest:
$ kubectl get pods -n kube-system | grep argocd
helm-install-argocd-5jc9s                        0/1   Completed   1   40m
argocd-redis-774b4b475c-8v9s8                    1/1   Running     0   40m
argocd-dex-server-6ff57ff5fd-62v9b               1/1   Running     0   40m
argocd-server-5bf58444b4-mpvht                   1/1   Running     0   40m
argocd-repo-server-6d456ddf8f-h9gvd              1/1   Running     0   40m
argocd-application-controller-67c7856685-qm9hm   1/1   Running     0   40m
Now we can move on to configuring wildcard DNS for our ingresses and certificate automation.
DNS configuration
For DNS, I got the atoy.dev domain through NameCheap, but you can use whatever DNS provider you like. What we need to do is create a wildcard CNAME record to route all requests to the AWS ELB that is managing the applications' ingress.
First, get your Elastic Load Balancer hostname from the AWS console: navigate to the EC2 section and click Load Balancers in the left menu. You should then see a newly created load balancer with a random-character name. If you check its tags, it should reference your new Kubernetes cluster.
You need to copy the DNS name from that entry. For my domain, I went to the NameCheap Advanced DNS page and entered a CNAME record for *.demo.atoy.dev pointing at the DNS name copied from AWS. Adjust the entry for your provider and domain:
To verify that it works, you can install and use nslookup to make sure it resolves to the correct hostname:
$ nslookup test.demo.atoy.dev
Server:         22.214.171.124
Address:        126.96.36.199#53

Non-authoritative answer:
test.demo.atoy.dev  canonical name = a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com.
Name:    a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com
Address: 188.8.131.52
Name:    a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com
Address: 184.108.40.206
Now on to the umbrella applications.
Argo CD and umbrella applications
We know Argo CD is already deployed, but now we'll use Argo CD's App-of-Apps model to deploy the rest of our tool suite. Since we're using GitOps, you need to fork the k8s-tools-app repository to your own GitHub account, and then we need to make some changes to match your environment:
- You need to do a global search and replace of https://github.com/atoy3731/k... and change it to the git URL of the repository you just forked. This lets you manage your own environment and have Argo CD pull from it. Also, make sure your Git repository is public so Argo CD can access it.
- In resources/tools/resources/other-resources.yaml, change argoHost and issuerEmail to match your domain and email.
- In resources/tools/resources/rancher.yaml, change the hostname and email to match your domain and email.
- In resources/apps/resources/hello-world.yaml, change the two references to app.demo.atoy.dev to match your domain.
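The global search and replace in the first step can be scripted. A minimal sketch, assuming a POSIX sed and placeholder URLs (substitute the real upstream URL and your fork's URL; a demo file stands in for the repository contents so the snippet is self-contained):

```shell
# Replace the upstream repo URL with your fork's URL in every file that
# references it. Both URLs below are placeholders.
OLD_URL="https://github.com/UPSTREAM_OWNER/k8s-tools-app"
NEW_URL="https://github.com/YOUR_USER/k8s-tools-app"

# Demo file standing in for the repository contents:
mkdir -p resources
printf 'repoURL: %s\n' "$OLD_URL" > resources/app.yaml

grep -rl "$OLD_URL" resources | while read -r f; do
  sed "s|$OLD_URL|$NEW_URL|g" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
cat resources/app.yaml
```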
Once you've made these updates, go ahead and commit and push your changes to your forked GitHub repository. Now you're ready to apply the umbrella app. In your local clone of the repository, run the following:
$ kubectl apply -f umbrella-tools.yaml
appproject.argoproj.io/tools created
application.argoproj.io/umbrella-tools created
Now Argo CD will start provisioning all the other tools that the repository defines for your cluster. You can get a list of the deployed applications by running:
$ kubectl get applications -n kube-system
NAME                AGE
other-resources     56m
umbrella-tools      58m
rancher             57m
vault-impl          57m
vault-operator      58m
vault-webhook       57m
cert-manager        57m
cert-manager-crds   58m
You'll have to wait five minutes or so for everything to become ready and for LetsEncrypt to generate staging certificates. Once things are working as expected, you should see two generated ingress entries that you can access through your browser:
$ kubectl get ingress -A
NAMESPACE       NAME             CLASS    HOSTS                   ADDRESS                                                                  PORTS     AGE
cattle-system   rancher          <none>   rancher.demo.atoy.dev   a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com   80, 443   59m
kube-system     argocd-ingress   <none>   argo.demo.atoy.dev      a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com   80, 443   58m
NOTE 1: To avoid any LetsEncrypt rate limits, we are using invalid staging certificates. One side effect is that when you access Argo, Rancher, or your hello-world application, you'll get an SSL exception. In Chrome, type thisisunsafe while the exception page is loaded and it will let you through. You can also look into updating Cert-manager's ClusterIssuer to use production-grade trusted certificates.
NOTE 2: K3s comes preinstalled with Traefik as the ingress controller, so for simplicity we use it directly.
NOTE 3: After logging in to Rancher for the first time, you need to generate a password and enter the URI used to access Rancher. The URI should be preloaded in the form, so you can just click "Okay".
NOTE 4: To log in to Argo CD, the username is admin and the password is the name of the argocd-server pod. You can get this server's pod name (in this case, argocd-server-5bf58444b4-mpvht) like this:
$ kubectl get pods -n kube-system | grep argocd-server
argocd-server-5bf58444b4-mpvht   1/1   Running   0   64m
You should now be able to access the Argo CD UI, log in, and look around, as shown below:
Now that our tools are deployed, let's store a secret in Vault for the hello-world application to pull in.
Creating a secret in Vault
To keep things simple, there's a helper script in the tools repository. Run the following command to get the Vault admin token and the port-forward command:
$ sh tools/vault-config.sh
Your Vault root token is: s.qEl4Ftr4DR61dmbH3umRaXP0

Run the following:
export VAULT_TOKEN=s.qEl4Ftr4DR61dmbH3umRaXP0
export VAULT_CACERT=/Users/adam.toy/.vault-ca.crt
kubectl port-forward -n vault service/vault 8200 &

You will then be able to access Vault in your browser at: https://localhost:8200
Run the commands from the output, then navigate to https://localhost:8200 and log in with the root token above.
When you log in, you should land on a secrets-engine page. Click the secret/ entry, then click Create secret at the top right. We're going to create a demo secret, so add the following and click Save:
Now we have a secret ready for the hello-world application.
Deploying the Hello World application
Now, back in our local clone of the repository, let's run the following to deploy the hello-world application:
$ kubectl apply -f umbrella-apps.yaml
appproject.argoproj.io/apps created
application.argoproj.io/umbrella-apps created
Once created, go back to the Argo CD UI and you should see two new applications: umbrella-apps and demo-app. Click demo-app, then wait for all the resources to become healthy:
Once the state is Healthy, you should be able to reach your application at https://app.YOUR-DOMAIN.
Let's also verify that the Vault secret is being injected into our application pods. In the Argo CD UI's demo-app, click one of the application pods, then click the Logs tab at the top. There should be two containers listed on the left; select the test-deployment container. At the top of the logs, you should see your secret between two lines of equal signs:
Now let's test Argo CD and make sure it synchronizes automatically when we make changes in the repository.
In your repository, find the resources/apps/resources/hello-world.yaml file and change replicaCount from 5 to 10. Commit and push the change to the main branch, then navigate back to demo-app in the Argo CD UI. When Argo CD hits its refresh interval, it will automatically start deploying the other five application replicas (if you don't want to wait, you can click the Refresh button in your umbrella-apps Argo application):
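If you prefer the command line, the replica bump can be scripted; this sketch edits a stand-in copy of the file (created here so the snippet is self-contained, with the path matching the repository layout described above):

```shell
# Bump replicaCount from 5 to 10 in hello-world.yaml.
# A stand-in copy of the file is created for demonstration.
mkdir -p resources/apps/resources
printf 'replicaCount: 5\n' > resources/apps/resources/hello-world.yaml

f=resources/apps/resources/hello-world.yaml
sed 's/replicaCount: 5/replicaCount: 10/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
cat "$f"
```

After this, committing and pushing the file is what actually triggers Argo CD to reconcile the change.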
If you're going to tear down your cluster, first go to the AWS console, open the EC2 service, and click Load Balancers. You'll see an ELB that was created by the Kubernetes cloud provider but is not managed by Terraform, so you need to clean it up yourself. You also need to delete the security group the ELB uses.
After cleaning up the ELB, run the following command and enter yes when prompted:
terraform destroy -var-file=example.tfvars
What's next?
We now have a good set of tools for deploying applications with GitOps. So what's next? If you're up for a challenge, you could try deploying your own application alongside the hello-world application, or even try implementing CI/CD by updating image tags in the application's manifest repository. That way, when a new application image is built, the new tag is automatically updated in the manifest repository and Argo CD deploys the new version.