Serve rpm repository from AWS S3

Some notes on how this was set up to serve custom rpm packages from an AWS S3 bucket, mostly based on .

Make sure the aws cli is setup on the workstation:

mkdir -p ~/.aws
cat <<EOF >~/.aws/config
[default]
region = us-east-1
EOF

cat <<EOF >~/.aws/credentials
[default]
aws_access_key_id = AWS_ACCESS_KEY
aws_secret_access_key = AWS_SECRET_ACCESS_KEY
EOF

From the workstation, create a new S3 bucket. If you intend to use your own domain name, it’s important that the bucket has the same name as the domain you intend to use.

Also create a user (to upload files to the bucket) and give this user access to your S3 buckets.

aws s3 mb s3://
aws iam create-user --user-name zonalivre-rpm-repo
aws iam create-access-key --user-name zonalivre-rpm-repo
cat <<EOF >zonalivre-rpm-repo_policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": ["*", ""]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
            "Condition": {}
        }
    ]
}
EOF
aws iam put-user-policy --user-name zonalivre-rpm-repo --policy-name zonalivre-rpm-repo-bucket-access --policy-document file://zonalivre-rpm-repo_policy.json

From the workstation, check the user is configured correctly

aws iam list-user-policies --user-name zonalivre-rpm-repo
{
    "PolicyNames": [
        "zonalivre-rpm-repo-bucket-access"
    ]
}

On the S3 console, click “Properties”->”Static Website Hosting”->”Enable static website hosting”. You’ll also need to provide an Index document. Just enter index.html.

Create a DNS alias (CNAME) to point your domain at the new bucket's website endpoint. After this is set up and propagated, it should look like:

host <your-domain>
<your-domain> is an alias for ..., which is an alias for ..., which has address ...

Install required packages on build server. In this particular case, packages are installed as per chef recipe .

The important one here is s3cmd, available as part of the epel repo.

On the build server, configure s3cmd

s3cmd --configure
s3cmd ls
2016-01-24 09:03  s3://

Still on the build server, generate a public/private GPG key pair.
See for a number of gotchas around using gpg keys to sign rpms. Some of the bugs described in the article may have been fixed by now.

gpg --gen-key
# Choose a signing-only RSA key
# Your key cannot have any subkeys
# Your key must be > 1024-bit (I used 2048)

gpg needs randomness to generate keys. The more entropy there is in your system, the quicker the keys will be generated. You can check how much entropy is available with:

watch -n 1 cat /proc/sys/kernel/random/entropy_avail
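For use in a script rather than interactively, a one-shot check along the same lines might look like this (the 2000-bit threshold is an arbitrary illustration, not a gpg requirement):

```shell
#!/bin/sh
# Print the kernel's current entropy estimate and warn when it looks low.
# Note: the 2000-bit threshold below is arbitrary, purely for illustration.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy available: ${entropy} bits"
if [ "${entropy}" -lt 2000 ]; then
  echo "entropy is low - key generation may take a while"
fi
```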

There are multiple ways of generating additionally entropy so that key generation happens quicker, such as moving your mouse around, hitting the keyboard, or using specialized software. A quick and dirty way of generating entropy is to run this in a separate shell:

find / -type f | egrep -v "(/dev|/proc|/sys/kernel)" | xargs md5sum

Once key generation completes, list generated keys:

gpg --list-keys
pub   2048R/859031CB 2016-01-24
uid                  Builder <>

And export the public key:

gpg --output ~/RPM-GPG-KEY-zonalivre-rpm-repo --armor --export 859031CB

Add the gpg rpm macros, remembering to replace YOUR_GPG_KEY_ID.
In my case, this is 859031CB.

cat <<EOF >~/.rpmmacros
%_signature gpg
%_gpg_path $HOME/.gnupg
%_gpg_name <YOUR_GPG_KEY_ID>
%_gpg_bin /usr/bin/gpg
%packager Builder <>
%_topdir $HOME/rpmbuild
EOF

Now import GPG public key into rpm

rpm --import ~/RPM-GPG-KEY-zonalivre-rpm-repo

Confirm key has been imported into rpm

rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n' | grep zona
gpg-pubkey-859031cb-56a4c669 --> gpg(Builder <>)

Create the .repo file

cat <<EOF >~/zonalivre-rpm.repo
[zonalivre-rpm]
name=Extra Packages from Zonalivre RPM Repository -
baseurl=
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zonalivre-rpm-repo
EOF

Create rpmbuild tree

mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

To package this repo as an rpm, create a spec file

cat <<EOF >~/rpmbuild/SPECS/zonalivre-rpm.spec
# Zonalivre RPM Repository configuration files and GPG key
%define name zonalivre-rpm-repo
%define version 1
%define release 0.1
%define buildroot %{_topdir}/%{name}-%{version}-root
BuildArch:  noarch
BuildRoot:  %{buildroot}
Summary:    Zonalivre RPM Repository
License:    MIT
Name:       %{name}
Version:    %{version}
Release:    %{release}
Group:      Development/Tools

%description
Package containing Zonalivre RPM Repository configuration files and GPG key.

%prep
exit 0

%install
mkdir -p \$RPM_BUILD_ROOT%{_sysconfdir}/yum.repos.d/
mkdir -p \$RPM_BUILD_ROOT%{_sysconfdir}/pki/rpm-gpg/
cp -p ~/zonalivre-rpm.repo \$RPM_BUILD_ROOT%{_sysconfdir}/yum.repos.d/
cp -p ~/RPM-GPG-KEY-zonalivre-rpm-repo \$RPM_BUILD_ROOT%{_sysconfdir}/pki/rpm-gpg/

%files
%{_sysconfdir}/yum.repos.d/zonalivre-rpm.repo
%{_sysconfdir}/pki/rpm-gpg/RPM-GPG-KEY-zonalivre-rpm-repo

%changelog
* Sun Jan 24 2016 Builder <> 1.0.1
- First release.
EOF

Create RPM

cd ~/rpmbuild/SPECS/
rpmbuild -v -ba --sign --clean zonalivre-rpm.spec

Verify package signature

rpm --checksig ~/rpmbuild/RPMS/noarch/zonalivre-rpm-repo-1-0.1.noarch.rpm 
/root/rpmbuild/RPMS/noarch/zonalivre-rpm-repo-1-0.1.noarch.rpm: rsa sha1 (md5) pgp md5 OK

Create and populate the final repository structure

mkdir -vp ~/{x86_64,noarch}
cp ~/RPM-GPG-KEY-zonalivre-rpm-repo ~/
cp -rv ~/rpmbuild/RPMS/* ~/
for repo in ~/{x86_64,noarch}; do
  createrepo -v --deltas ${repo}/
done

Sync the repository structure to AWS S3:

s3cmd -P sync ~/ s3://

And finally, our repo rpm can be installed on a client:

yum localinstall





curl -sSL -O && chmod +x docker-1.9.1 && sudo mv docker-1.9.1 /usr/local/bin/docker

Run daemon

sudo /usr/local/bin/docker daemon

Run client

docker info

Docker Images vs Docker Containers


docker images
docker search linux
docker pull ubuntu
docker images

This does not cover how to create your own images.


docker run ubuntu echo Hello World
docker ps
docker ps -a
docker logs $CONTAINER_NAME

Running a container with a custom name

docker run --help | less
docker run --name HelloWorld ubuntu echo Hello World
docker ps -a
docker inspect HelloWorld | less
docker rm HelloWorld

Running an interactive container

docker run --name ubuntu_shell -t -i  ubuntu bash
docker start ubuntu_shell
docker exec ubuntu_shell ifconfig
docker exec -t -i ubuntu_shell bash
docker ps
docker attach ubuntu_shell
docker ps
docker rm ubuntu_shell

Running a web application with docker

docker run --name web -d -P training/webapp python app.py
# See
docker ps # Note randomly assigned port mapping
docker port web 5000
docker rm -f web
docker run --name web -d -p 5000:5000 training/webapp python app.py
docker port web 5000
curl --noproxy localhost http://localhost:5000/
docker logs web
docker top web
docker rm -f web

Network containers

Check existing docker networks

docker network ls
docker network inspect bridge

Create new network

docker network create -d bridge privatenet
docker network inspect privatenet

Start DB container in privatenet

docker run -d --net=privatenet --name db training/postgres  # See

Start Web container in bridged network

docker run --name web -d -p 5000:5000 training/webapp python app.py
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web

Check IP Addresses of each container

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web

Check DB container can’t access web container

docker exec -it db bash
ping web   # fails: db and web are on different networks

docker network connect privatenet web
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db
docker exec -it db bash
ping web
docker rm -f web db

Mounting volumes in containers

Volume characteristics

  • Initialized when a container is created
  • Can be shared and reused among containers
  • Changes to a data volume are not included when an image is updated
  • Data volumes persist even if the container is deleted

Sharing data between container and host

mkdir myfiles
echo Hello > myfiles/hello
docker run -it --name ubuntu -v $PWD/myfiles:/myfiles ubuntu bash
ls -l /myfiles
docker inspect ubuntu | less # search for Mounts

Sharing data between containers

# A sketch using a named volume; the names below are hypothetical
docker volume create sharedvol
docker run --rm -v sharedvol:/shared ubuntu bash -c 'echo hello > /shared/file'
docker run --rm -v sharedvol:/shared ubuntu cat /shared/file

Time individual statements in bash

Sometimes the 1000+ line bash script you inherited at work takes 3 hours to complete and you’re not sure where the time is being spent. StackOverflow has some good answers on this topic:

The simplest one that gives me the most value for the least effort (imo) is:

PS4='+[${SECONDS}s][${BASH_SOURCE}:${LINENO}]: ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'

PS4='+[${SECONDS}s][${BASH_SOURCE}:${LINENO}]: ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
set -x

echo 1
sleep 1
echo 2
sleep 2
echo 3

This produces the following output:

joao@home:~> ./
+[0s][./]: echo 1
+[0s][./]: sleep 1
+[1s][./]: echo 2
+[1s][./]: sleep 2
+[3s][./]: echo 3
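One caveat: with set -x the trace goes to stderr, interleaved with the script's own output. On bash >= 4.1 you can send the trace to a separate file via BASH_XTRACEFD; a minimal sketch (trace.log is an arbitrary filename):

```shell
#!/bin/bash
# Send the set -x trace to trace.log instead of stderr (bash >= 4.1).
exec 5> trace.log
export BASH_XTRACEFD=5
PS4='+[${SECONDS}s][${BASH_SOURCE}:${LINENO}]: ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
set -x

echo 1
sleep 1
echo 2
set +x
# The timing trace now lives in trace.log, separate from the script's stdout.
```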

Happy bash profiling !


AWS SES domain verification

Amazon’s AWS SES service allows you to verify a domain so that you can then send email from any address in that domain through your EC2 instances.

The verification is done by adding a TXT record for that domain to your DNS server. The record name carries an underscore prefix (_amazonses.yourdomain.com) and its value looks like: OtNct7ugD0fOgjp70xpNpWj4K0xPcGcUopkcsiby9nE=

This is straightforward most of the time; however, if you host your DNS externally, some DNS providers don’t allow underscores in the TXT record name. In these cases, your TXT record needs to be written in this format instead: amazonses:OtNct7ugD0fOgjp70xpNpWj4K0xPcGcUopkcsiby9nE=
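To make the two record shapes concrete, a small sketch that prints both for a given token (the domain is hypothetical; the token is the example value from above):

```shell
#!/bin/sh
# Illustrate the two shapes of the SES verification TXT record.
TOKEN="OtNct7ugD0fOgjp70xpNpWj4K0xPcGcUopkcsiby9nE="
DOMAIN="example.com"   # hypothetical domain

# Usual shape: record name carries the _amazonses prefix, value is the bare token
echo "_amazonses.${DOMAIN}. IN TXT \"${TOKEN}\""

# Fallback shape for providers that reject underscores in record names
echo "${DOMAIN}. IN TXT \"amazonses:${TOKEN}\""
```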

Happy SES’ing !


Docker Hello World Example

Some quick notes on how to get up and running with Docker.


Install docker

On Fedora this turned out to be pretty easy, simply:

dnf install docker

For other systems see the Docker installation manual.

Start docker daemon

systemctl status -l docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2015-12-12 22:40:41 GMT; 3min 35s ago
Main PID: 5818 (docker)
CGroup: /system.slice/docker.service
└─5818 /usr/bin/docker daemon --selinux-enabled

Dec 12 22:40:01 asterix docker[5818]: time="2015-12-12T22:40:01.803909914Z" level=error msg="WARNING: No --storage-opt dm.thinpooldev specified, using loopback; this configuration is strongly discouraged for production use"
Dec 12 22:40:40 asterix docker[5818]: time="2015-12-12T22:40:39.995085846Z" level=warning msg="Docker could not enable SELinux on the host system"
Dec 12 22:40:40 asterix docker[5818]: time="2015-12-12T22:40:40.059743984Z" level=info msg="Option DefaultDriver: bridge"
Dec 12 22:40:40 asterix docker[5818]: time="2015-12-12T22:40:40.059788786Z" level=info msg="Option DefaultNetwork: bridge"
Dec 12 22:40:40 asterix docker[5818]: time="2015-12-12T22:40:40.388329409Z" level=info msg="Firewalld running: true"
Dec 12 22:40:41 asterix docker[5818]: time="2015-12-12T22:40:41.278276562Z" level=info msg="Loading containers: start."
Dec 12 22:40:41 asterix docker[5818]: time="2015-12-12T22:40:41.278585203Z" level=info msg="Loading containers: done."
Dec 12 22:40:41 asterix docker[5818]: time="2015-12-12T22:40:41.278606222Z" level=info msg="Daemon has completed initialization"
Dec 12 22:40:41 asterix docker[5818]: time="2015-12-12T22:40:41.278627443Z" level=info msg="Docker daemon" commit="cb216be/1.8.2" execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-fc22
Dec 12 22:40:41 asterix systemd[1]: Started Docker Application Container Engine.

An interesting warning about no --storage-opt dm.thinpooldev specified. This is related to Docker's storage driver, which the docs explain in good detail.

Run Hello World

First, some info about the docker installation

docker info

It’s better to follow along the official and pretty good step by step documentation on basic docker usage, but the basics are below:

Run a single command

[root@asterix ~]# docker run ubuntu:14.04 /bin/echo 'Hello world'
Hello world

Interactive shell

[root@asterix ~]# docker run -t -i ubuntu:14.04 /bin/bash
root@8aed75e824e2:/# hostname
root@8aed75e824e2:/# exit
[root@asterix ~]#

Restart a container that was previously running

docker ps -a
docker restart $CONTAINER_ID
docker exec -it $CONTAINER_ID /bin/bash

Copy file from container to host

docker cp $CONTAINER_ID:/path/in/container /path/on/host

Under the covers

Basic information about the newly created image:

[root@asterix ~]# docker images
REPOSITORY          TAG       IMAGE ID       CREATED      VIRTUAL SIZE
ubuntu              14.04     d55e68e6cc9c   4 days ago   187.9 MB

Local docker containers are stored in /var/lib/docker

[root@asterix ~]# ls -l /var/lib/docker/
total 36
drwx------ 5 root root 4096 Dec 12 23:09 containers
drwx------ 5 root root 4096 Dec 12 22:50 devicemapper
drwx------ 7 root root 4096 Dec 12 22:50 graph
-rw-r--r-- 1 root root 5120 Dec 12 23:09 linkgraph.db
-rw------- 1 root root 114 Dec 12 22:50 repositories-devicemapper
drwx------ 2 root root 4096 Dec 12 22:50 tmp
drwx------ 2 root root 4096 Dec 12 22:49 trust
drwx------ 2 root root 4096 Dec 12 22:40 volumes

And we can see what looks like an ext4 filesystem:

[root@asterix ~]# file /var/lib/docker/devicemapper/devicemapper/data
/var/lib/docker/devicemapper/devicemapper/data: Linux rev 1.0 ext4 filesystem data, UUID=6d11e0b5-f063-4a4f-99f4-36253b12b297 (extents) (large files) (huge files)

CentOS 7 kickstart file

The minimal CentOS 7 kickstart file I could come up with for my requirements. Enjoy.

lang en_GB.UTF-8
keyboard uk
timezone UTC
auth  --useshadow  --passalgo=sha512
selinux --disabled
firewall --disabled
services --enabled=NetworkManager,sshd
eula --agreed
rootpw --plaintext password
ignoredisk --only-use=sda
bootloader --location=mbr --timeout=0
clearpart --all --initlabel
part swap --asprimary --fstype="swap" --size=1024
part /boot --fstype xfs --size=200
part pv.01 --size=1 --grow
volgroup rootvg01 pv.01
logvol / --fstype xfs --name=lv01 --vgname=rootvg01 --size=1 --grow

repo --name=base --baseurl=
repo --name=updates --baseurl=
url --url=""

%packages --nobase --ignoremissing
%end

%post --log=/root/postinstall.log

# Do the bare minimum so that I can ssh to the box and run a shell script to execute whatever provisioning needs happening

mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6hE75Ox6wDfXVJzXeKdyUBO4o19TtGxboJTI2vR3CE9ZJbODIxSr+tfMZcwmuSF892PiahhVzAA2wJ6LdMtFH6FUIGvjU0i7jIo/x+TmvheH46N9qllo2C2ZlxL/HbpRYIyqEntUYcBQzYBvUwnzoDFgS1GhG4LalYp0U9zlHGOA/Wk7qBjH8Ca1mtPSnxudsb/NwERIjfLbvdX9Fc+vkx6fs3ykJv+p8lPEZkw3kcVAfuyhnXzE7kprSHDuOuQo0FDvCTjy9ISxZPvExKT7bD7vQRlrx9PLzYSWI7/evonWHR8c/jPS8U56ii8YH/rtC/iqo4LiwKFxoxaDdS2wD" > /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys

augtool -s <<EOF
#root login needs to be enabled during initial setup so the project specific scripts can be executed
set /files/etc/ssh/sshd_config/PermitRootLogin yes

#This saves time during vm startup
set /files/etc/grub.conf/timeout 0

#Removed because otherwise user install scripts can't use sudo
rm /files/etc/sudoers/Defaults[requiretty]
EOF

%end
