I am sending EKS logs to an ELK stack. It works great, but the AWS ELB health check creates a lot of log entries and makes it hard to track other events on the Kibana dashboard.

My images are based on nginx. Here is the nginx configuration to disable logging for ELB-HealthChecker/2.0.

First, create a map for the user agent and set a new access_log directive in the related config sections, then build a new Docker image. I updated /etc/nginx/conf.d/default.conf in this example, because I use the following statement in my Dockerfile. You should update the correct nginx conf file based on your own Dockerfile.

COPY nginx.conf /etc/nginx/conf.d/default.conf

#Disable…
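The full config is in the post; a minimal sketch of the idea, assuming the stock main log format and the default site layout of the official nginx image, looks like this:

# /etc/nginx/conf.d/default.conf
# map the User-Agent header to a flag: 0 for the ELB health checker, 1 for everyone else
map $http_user_agent $loggable {
    "~ELB-HealthChecker" 0;
    default              1;
}

server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        # the if= parameter skips logging when $loggable evaluates to 0
        access_log /var/log/nginx/access.log main if=$loggable;
    }
}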

I am working on Terraform modules in private git repos. I use my default SSH key (~/.ssh/id_rsa) on macOS for my own personal git repos.

GitHub does not allow using the same key for another GitHub account. I created another SSH key pair and uploaded it to GitHub, but git clone and terraform init still used the default key pair. Running ssh-agent and adding my new key to the agent with ssh-add did not help.

Solution

Use the GIT_SSH_COMMAND environment variable.
Create a new SSH key, add the new public key to GitHub, and set GIT_SSH_COMMAND:

$ ssh-keygen -t rsa -f ~/.ssh/mynewssh
$ export GIT_SSH_COMMAND="ssh -i ~/.ssh/mynewssh"
$ eval $(ssh-agent)
$ ssh-add ~/.ssh/mynewssh
$ terraform init
On Fish shell:
set -gx GIT_SSH_COMMAND "ssh -i ~/.ssh/mynewssh"
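If ssh still offers the default key first (for example because the macOS agent already holds it), forcing the listed identity with -o IdentitiesOnly=yes helps; a host alias in ~/.ssh/config is another option (the alias and repo names below are my own):

$ export GIT_SSH_COMMAND="ssh -i ~/.ssh/mynewssh -o IdentitiesOnly=yes"

Or in ~/.ssh/config:

Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/mynewssh
    IdentitiesOnly yes

and then reference the repo as git@github-work:myorg/terraform-modules.git.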


By default, Argo CD has only one built-in user, admin. If you want to create new users, you must configure Kubernetes ConfigMaps.

In this example, I will explain how to create local users, set custom permissions for them, and set their passwords. I installed Argo CD with the Helm chart at https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd

We are going to update the config: and rbacConfig: sections of the Helm chart values in values.yaml.

Create users

We will create three users (qauser, devuser and adminuser) and add them with an accounts.<username>: login statement in the config: section of values.yaml, as below. Under the hood this updates the argocd-cm ConfigMap.

config:
# Argo CD's externally facing base URL (optional)…
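Filling in the idea (the role names and policies below are illustrative, not chart defaults), the relevant values.yaml parts might look like this:

config:
  accounts.qauser: login
  accounts.devuser: login
  accounts.adminuser: login

rbacConfig:
  policy.default: role:readonly
  policy.csv: |
    p, role:qa-role, applications, get, */*, allow
    g, qauser, role:qa-role
    g, adminuser, role:admin

Passwords can then be set with argocd account update-password --account qauser while logged in as admin.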

MongoDB Atlas documentation recommends using the following mongodump command with URI syntax to dump a database. But it does not always work, due to a Go DNS issue reported at https://github.com/golang/go/issues/37362:

$ mongodump --uri="mongodb+srv://dbUser:mypass@mycluster.jnszz.mongodb.net/mymongodb"
2021-03-03T19:15:19.165+0000 error parsing command line options: error parsing uri: lookup mycluster.jnszz.mongodb.net on 127.0.0.53:53: cannot unmarshal DNS message
2021-03-03T19:15:19.165+0000 try 'mongodump --help' for more information

For that reason, I use the --host syntax instead. It requires the replica set name. To learn the replica set name, you can connect to the database using mongo with the following SRV format.

$ mongo "mongodb+srv://dbUser:mypass@mycluster.jnszz.mongodb.net/admin?retryWrites=true&w=majority"
MongoDB shell version v4.2.12 …
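The shell banner prints the replica set name; for Atlas it typically looks like atlas-xxxxxx-shard-0. A sketch of the resulting mongodump command, with a hypothetical replica set name and shard hostnames derived from the cluster name:

$ mongodump --host "atlas-abc123-shard-0/mycluster-shard-00-00.jnszz.mongodb.net:27017,mycluster-shard-00-01.jnszz.mongodb.net:27017,mycluster-shard-00-02.jnszz.mongodb.net:27017" --ssl --username dbUser --password mypass --authenticationDatabase admin --db mymongodb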

Source: https://kreuzwerker.de/post/aws-multi-account-setups-reloaded

In this article, I will explain step by step what needs to be done to implement a multi-account AWS setup with the AWS CLI. I am planning to create a story series for AWS multi-account deployment.

AWS Accounts

We will create the following child accounts under an AWS Organization.

security
mgmt
dev
stage
prod

Architecture

  • Create all IAM users in the security account
  • Create dev and admin roles in the dev, stage, prod and mgmt accounts, and grant access to these roles from the security account (a sketch of the resulting AWS CLI profiles follows this list)
  • Create policies (i.e. one policy with limited dev permissions, another policy with full admin permissions on target accounts) in the security account to allow assuming role on…
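As an illustration of where this ends up, the AWS CLI side of such a setup is a set of profiles that assume the target-account roles from the security account (account IDs and role names below are hypothetical):

# ~/.aws/config
[profile security]
region = eu-west-1

[profile dev]
role_arn = arn:aws:iam::111111111111:role/dev
source_profile = security

[profile prod-admin]
role_arn = arn:aws:iam::222222222222:role/admin
source_profile = security

Then aws s3 ls --profile prod-admin makes the CLI assume the admin role in the prod account using the security-account credentials.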

Falco is a Kubernetes threat detection engine. Falco supports Kubernetes audit events, tracking the changes made to your cluster that match the k8s audit rules you define.

Unfortunately, AWS EKS is a managed Kubernetes service, and it can only send audit logs to CloudWatch. This means there is no direct way for Falco to inspect EKS audit events.

We need to implement a solution to ship audit logs from CloudWatch to Falco. There are two solutions (a note on the Falco side follows the list):

  1. https://github.com/sysdiglabs/ekscloudwatch
  2. https://github.com/xebia/falco-eks-audit-bridge (I would not recommend this solution. …
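Whichever forwarder you choose, Falco itself must be configured to accept the forwarded audit events. At the time of writing this was done with the embedded webserver in falco.yaml; the port and endpoint below are the usual defaults, but treat them as an assumption:

webserver:
  # enable the embedded webserver so the forwarder can POST audit events to Falco
  enabled: true
  listen_port: 8765
  k8s_audit_endpoint: /k8s-audit
  ssl_enabled: false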

If you are running Jenkins on an AWS EC2 instance, you can build a Docker image on Jenkins and push it to an ECR registry without creating credentials in Jenkins, because the EC2 instance profile credentials are used.

You can get a sample Jenkinsfile from my gist.

This pipeline logs in to ECR, builds a Docker image from the Dockerfile in my git repo, and pushes it to my ECR nginx repository.

The key line in this Jenkinsfile is:

eval $(aws ecr get-login --region "$AWS_REGION" --no-include-email)

In order to build a Docker image and push it to ECR (a pipeline sketch follows this list):

  1. You must install docker and awscli on the Jenkins instance.
  2. Give docker access to the jenkins user by…
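A minimal declarative pipeline along those lines (the region, repository URL and stage name are placeholders, not the exact gist contents):

pipeline {
    agent any
    environment {
        AWS_REGION = 'eu-west-1'
        ECR_REPO   = '123456789012.dkr.ecr.eu-west-1.amazonaws.com/nginx'
    }
    stages {
        stage('Build and push') {
            steps {
                sh '''
                    # authenticate docker to ECR with the EC2 instance profile credentials
                    eval $(aws ecr get-login --region "$AWS_REGION" --no-include-email)
                    docker build -t "$ECR_REPO:$BUILD_NUMBER" .
                    docker push "$ECR_REPO:$BUILD_NUMBER"
                '''
            }
        }
    }
}

Note that aws ecr get-login is AWS CLI v1 syntax; on AWS CLI v2 the equivalent is aws ecr get-login-password | docker login --username AWS --password-stdin <registry>.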

I am working on creating a CloudFormation stack with a Terraform resource.

I need to define tags in the CloudFormation YAML template in addition to the tags on the Terraform resource.

I added the following lines to the YAML template file to make it work:

%{~ if length(mytags) > 0 ~}
      Tags:
%{~ endif ~}
%{~ for tag_key, tag_value in mytags ~}
        - Key: "${tag_key}"
          Value: "${tag_value}"
%{~ endfor ~}
You can see all the details in my gist, and check https://www.terraform.io/docs/configuration/expressions/strings.html for string templates and templatefile handling.
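For context, a sketch of how the template might be rendered on the Terraform side (the resource, file and variable names here are my own):

resource "aws_cloudformation_stack" "example" {
  name = "example-stack"

  # render the CloudFormation template, passing the tag map into the template file
  template_body = templatefile("${path.module}/template.yaml", {
    mytags = {
      Environment = "dev"
      Owner       = "devops"
    }
  })
}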


If you create an aws_directory_service_directory resource and later change the password parameter, Terraform will destroy and recreate the AD resource.

provider "aws" {
region = "us-west-2"
}
resource "random_password" "mypassword" {
length = 20
special = true
override_special = "_$%"
min_special = 1
}resource "aws_directory_service_directory" "mysimplead" {
name = "test.simplead"
type = "SimpleAD"
description = "my simple AD"
password = random_password.mypassword.result
alias = "mysimple"
size = "Small"
vpc_settings {
vpc_id = "vpc-005d9xyz2658a45"
subnet_ids = ["subnet-015fcx166c80407", "subnet-0d0f1x6d5919b8"]
}
}

I just edited the mypassword values and then ran terraform apply:

# aws_directory_service_directory.mysimplead must be replaced
-/+ resource "aws_directory_service_directory" "mysimplead" {
~ access_url = "mysimple.awsapps.com" …
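If the replacement is not acceptable, one possible workaround (my suggestion, not something from the plan output above) is to tell Terraform to ignore password drift on the directory:

resource "aws_directory_service_directory" "mysimplead" {
  # ... arguments as above ...

  lifecycle {
    # skip replacement when the generated password changes;
    # the directory keeps its original password, which then no longer matches random_password
    ignore_changes = [password]
  }
}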

When you create an EKS cluster, AWS creates an EKS cluster public endpoint address, something like ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com. This DNS record returns two IP addresses:

$ host ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com
ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com has address 5.195.16.196
ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com has address 18.203.6.19

Unfortunately, these IPs are not static (Elastic) IPs; they can change from time to time. That is fine if you don't apply any restrictions on access to this endpoint.

But in my case, I only allow access to the endpoint from an OpenVPN server running inside the VPC. I was assuming that these IPs would never change, and I was pushing routes to the VPN clients in the OpenVPN config.

But…
