By default, Argo CD has only one built-in user, admin. If you want to create new users, you must update the Kubernetes ConfigMaps.

In this example, I will explain how to create local users, define custom permissions for them, and set passwords. I installed Argo CD with the Helm chart at https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd

We are going to update the config: and rbacConfig: sections of the Helm chart values in values.yaml.

Create users

We will create three users (qauser, devuser and adminuser) by adding them with accounts.<username>: login entries in the config: section of values.yaml, as below. This actually updates the argocd-cm ConfigMap.

config:
# Argo CD's externally facing base URL (optional)…
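A minimal sketch of what the account entries could look like in that config: section (the url value is only a placeholder):

config:
  url: https://argocd.example.com
  accounts.qauser: login
  accounts.devuser: login
  accounts.adminuser: login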


MongoDB Atlas documentation recommends using the following mongodump command with URI syntax to dump a database. But it does not work well, as reported at https://github.com/golang/go/issues/37362.

$ mongodump --uri="mongodb+srv://dbUser:mypass@mycluster.jnszz.mongodb.net/mymongodb"
2021-03-03T19:15:19.165+0000 error parsing command line options: error parsing uri: lookup mycluster.jnszz.mongodb.net on 127.0.0.53:53: cannot unmarshal DNS message
2021-03-03T19:15:19.165+0000 try 'mongodump --help' for more information

For that reason, I use the --host syntax. It requires a replica set name. To learn the replica set name, you can connect to the database using mongo with the following SRV format.

$ mongo "mongodb+srv://dbUser:mypass@mycluster.jnszz.mongodb.net/admin?retryWrites=true&w=majority"
MongoDB shell version v4.2.12 …
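Once you have the replica set name, the dump can be run with the --host syntax. A sketch of such a command, where the replica set name and shard hostnames are placeholders to be replaced with the values reported by your cluster:

$ mongodump --host "atlas-abc123-shard-0/mycluster-shard-00-00.jnszz.mongodb.net:27017,mycluster-shard-00-01.jnszz.mongodb.net:27017,mycluster-shard-00-02.jnszz.mongodb.net:27017" --ssl --username dbUser --password mypass --authenticationDatabase admin --db mymongodb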

Source: https://kreuzwerker.de/post/aws-multi-account-setups-reloaded

In this article, I will explain step by step what needs to be done to implement multiple AWS accounts with the AWS CLI. I am planning to create a story series for AWS multi-account deployment.

AWS Accounts

We will create the following child accounts under an AWS Organization.

security
mgmt
dev
stage
prod

Architecture

  • Create all IAM users in the security account.
  • Create dev and admin roles in the dev, stage, prod and mgmt accounts, and grant access to these roles from the security account (a CLI profile sketch follows this list).
  • Create policies (i.e. one policy with limited dev permissions, another policy with full admin permissions on target accounts) in the security account to allow assuming role on…
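Once the roles exist, the cross-account setup on the CLI side could look roughly like this in ~/.aws/config; the account IDs, role names and region are illustrative, and the security profile's access keys would live in ~/.aws/credentials:

[profile security]
region = eu-west-1

[profile dev]
role_arn       = arn:aws:iam::111111111111:role/dev
source_profile = security
region         = eu-west-1

[profile prod-admin]
role_arn       = arn:aws:iam::222222222222:role/admin
source_profile = security
region         = eu-west-1

With this in place, a command like aws s3 ls --profile prod-admin makes the CLI assume the admin role in the prod account using the security account credentials.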


Falco is a Kubernetes threat detection engine. Falco supports Kubernetes audit events, so it can track the changes made to your cluster that match your k8s audit rules.

But unfortunately, AWS EKS is a managed Kubernetes service, and it can only send audit logs to CloudWatch. This means there is no direct way for Falco to inspect the EKS audit events.

We need to implement a solution to ship audit logs from CloudWatch to Falco. There are two solutions.

  1. https://github.com/sysdiglabs/ekscloudwatch
  2. https://github.com/xebia/falco-eks-audit-bridge (I would not recommend this solution. …


If you are running Jenkins on an AWS EC2 instance, you can build a Docker image on Jenkins and push it to an ECR registry without creating credentials on Jenkins.

You can get a sample Jenkinsfile from my gist.

This pipeline logs in to ECR, builds a Docker image from the Dockerfile in my git repo, and pushes it to my ECR nginx repository.

The key line in this Jenkinsfile is:
eval $(aws ecr get-login --region "$AWS_REGION" --no-include-email)
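A minimal sketch of such a declarative pipeline (not the exact gist; the region, account ID and repository name are placeholders):

pipeline {
    agent any
    environment {
        AWS_REGION = 'eu-west-1'
        ECR_REPO   = '123456789012.dkr.ecr.eu-west-1.amazonaws.com/nginx'
    }
    stages {
        stage('Build and push') {
            steps {
                // Log in to ECR using the instance role, then build and push the image
                sh '''
                  eval $(aws ecr get-login --region "$AWS_REGION" --no-include-email)
                  docker build -t "$ECR_REPO:$BUILD_NUMBER" .
                  docker push "$ECR_REPO:$BUILD_NUMBER"
                '''
            }
        }
    }
}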

In order to build a Docker image and push it to ECR:

  1. You must install docker and awscli on the Jenkins instance.
  2. Give docker access to the jenkins user by…


I am working on creating a CloudFormation stack with a Terraform resource.

I need to define tags in the CloudFormation YAML template in addition to the Terraform resource.

I added the following lines to the YAML template file to make it work.

%{~ if length(mytags) > 0 ~}
Tags:
%{~ endif ~}
%{~ for tag_key, tag_value in mytags ~}
  - Key: "${tag_key}"
    Value: "${tag_value}"
%{~ endfor ~}
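To render the template from Terraform, a templatefile call along these lines could be used; the resource name, template file name and variable are illustrative:

resource "aws_cloudformation_stack" "example" {
  name = "example-stack"

  # Render the CloudFormation template and pass the tag map consumed by the %{ for } loop above
  template_body = templatefile("${path.module}/template.yaml.tpl", {
    mytags = var.mytags
  })
}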
You can see all details in my following gist and check https://www.terraform.io/docs/configuration/expressions/strings.html for strings and templatefile handling.


If you create an aws_directory_service_directory and then change the password parameter, Terraform will destroy and recreate the AD resource.

provider "aws" {
region = "us-west-2"
}
resource "random_password" "mypassword" {
length = 20
special = true
override_special = "_$%"
min_special = 1
}resource "aws_directory_service_directory" "mysimplead" {
name = "test.simplead"
type = "SimpleAD"
description = "my simple AD"
password = random_password.mypassword.result
alias = "mysimple"
size = "Small"
vpc_settings {
vpc_id = "vpc-005d9xyz2658a45"
subnet_ids = ["subnet-015fcx166c80407", "subnet-0d0f1x6d5919b8"]
}
}

I just edited the mypassword values and then ran terraform apply.

# aws_directory_service_directory.mysimplead must be replaced
-/+ resource "aws_directory_service_directory" "mysimplead" {
~ access_url = "mysimple.awsapps.com" …


When you create an EKS cluster, AWS creates an EKS cluster public endpoint address, something like ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com. This DNS record returns two IP addresses.

$ host ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com
ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com has address 5.195.16.196
ACB6D2xyzC0ADBDA7833C.sk1.eu-west-1.eks.amazonaws.com has address 18.203.6.19

Unfortunately, these IPs are not static (Elastic IPs); they can change from time to time. That is fine if you don't apply any restrictions on access to this endpoint.

But in my case, I only allow access to the endpoint from OpenVPN running inside the VPC. I was assuming that these IPs would never change, and I was pushing routes to the VPN clients in the OpenVPN config.
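For illustration, pushing routes for those (non-static) endpoint IPs to the clients would look something like this in the OpenVPN server config:

push "route 5.195.16.196 255.255.255.255"
push "route 18.203.6.19 255.255.255.255"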

But…


I have two Pfsense firewalls for two sites. The sites are connected to each other with a Pfsense IPsec tunnel.

Today I experienced a strange issue. I can ssh from one site to the other without any issue, but I can't ssh in the other direction.

$ telnet 192.168.1.100  22
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.4

I can get the SSH banner with telnet, but ssh -l user 192.168.1.100 does not work. The ssh connection hangs in the expecting SSH2_MSG_KEX_ECDH_REPLY state.

debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com …

You can use the Traefik 2 ipwhitelist middleware to limit clients to specific IPs.

See the details at https://doc.traefik.io/traefik/middlewares/ipwhitelist/

Here is an example config for this middleware:


container_name: frontend
restart: always
labels:
  - traefik.http.routers.frontend.rule=Host(`mywebsite.com`)
  - traefik.http.services.frontend.loadbalancer.server.port=80
  - traefik.http.middlewares.dashboardwhitelist.ipwhitelist.sourcerange=127.0.0.1, 1.2.3.4

But unfortunately, this will never work! I had to do some research to make it work. I found the answer at https://github.com/traefik/traefik/blob/master/integration/resources/compose/whitelist.yml

In these examples, the traefik.http.routers.$routername.middlewares label is used to register the middleware with the router.

After adding the traefik.http.routers.frontend.middlewares=dashboardwhitelist label, it started working! The name of the middleware is not important. …
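For reference, a minimal sketch of the complete working label set, with the host name and source IPs as placeholders:

labels:
  - traefik.http.routers.frontend.rule=Host(`mywebsite.com`)
  - traefik.http.routers.frontend.middlewares=dashboardwhitelist
  - traefik.http.services.frontend.loadbalancer.server.port=80
  - traefik.http.middlewares.dashboardwhitelist.ipwhitelist.sourcerange=127.0.0.1,1.2.3.4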

ismail yenigül

Devops Engineer
