
Pierre Ozoux

Weekly - 28th of January - StandardCRDs

6 min read

I'm starting a series of blog posts where I'll share the work I did during the week on Free Software.

It is an opportunity to make visible work that often stays invisible to people, and also a way to test ideas and ways of explaining them.

I hope you'll like it; feedback is really appreciated.

Summary of last week

My current focus is on Standard CRDs; it is a bit the mother of all battles for me at the moment. It is a long shot, and I'd like to get feedback from the Kubernetes community.

Standard CRDs

What is a Resource in Kubernetes

Let's step back and define what Kubernetes is. Kubernetes is becoming The cloud API. With it, you can schedule compute, storage and load balancers. You get a nice abstraction to deploy your workload the same way on different cloud vendors or on premises.

This API is based on the Kubernetes Resource Model. I recommend reading this really nice document explaining how these resources are managed. It reads like an academic recipe for building a distributed system, but it is already implemented!

Most of the resources are defined upstream. As it is a cloud API, you can find the following resources:

  • pods - compute
  • endpoints, services, ingress - network
  • PersistentVolume, VolumeSnapshot - storage

All you need to make a good cloud API.
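To make this concrete, here is a minimal sketch of two such resources, a Pod (compute) and a Service (network). The names, labels and image are arbitrary examples of mine, not from any real deployment:

```yaml
# A minimal Pod (compute) resource; name, labels and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx
---
# A Service (network) resource exposing that Pod inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
  - port: 80
```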

The Kubernetes API is also nice to work with as you get, out of the box, the following features:

  • authentication (OpenID Connect, LDAP, there are many plugins)
  • authorization with RBAC
  • versioning of all the objects
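For instance, the RBAC authorization could look like this small Role, which grants read access to pods in one namespace (the namespace and name are illustrative):

```yaml
# A minimal RBAC Role; "default" and "pod-reader" are example names.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```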

What is a CRD

CRD stands for Custom Resource Definition. It is the standard way to extend the Kubernetes API. The amazing part is that it is really easy to add new building blocks to the Kubernetes API: you deploy your object definition (the CRD), then the associated controller, and then you are ready to deploy instances of your objects.

For example, you can create a PostgreSQL object definition. Then you deploy the associated controller (some people call this an operator). And then, as a user, you can create PostgreSQL instances just by deploying their configuration.
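As a purely hypothetical sketch, once such a PostgreSQL CRD and its controller were installed, a user could request an instance with a manifest like this. The API group, version and field names here are invented for illustration, not from any of the existing implementations:

```yaml
# Hypothetical custom resource; group, version and spec fields are made up.
apiVersion: databases.example.org/v1alpha1
kind: PostgreSQL
metadata:
  name: my-database
spec:
  version: "11"
  storage: 10Gi
  replicas: 2
```

The controller watches for objects of this kind and reconciles the cluster to match them, exactly like the built-in controllers do for Deployments.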

Why do we need a standard

Currently, just for PostgreSQL, I counted 5 different implementations. From my experience, when there is no standard, there is FUD (Fear, Uncertainty and Doubt) in the market. I experienced that twice in the container ecosystem.

When CoreOS introduced rkt, I suddenly felt a little halt in the growth of the container ecosystem. Especially in the enterprise market, these technology shifts are really expensive, and if you bet on the wrong technology, the consequences can be dramatic. The Open Container Initiative was a response to these doubts. The initial problem was that Docker was used by everybody, but its governance was in the hands of one company, and Docker Inc behaved badly with the community. Hence there was a need for a well defined standard. CoreOS, a company betting on container adoption, understood early that this was a threat to their business. That's why they introduced rkt: to force Docker Inc to join the Open Container Initiative. One year later, the market normalized again and everybody was confident that the technology they were using would indeed still be useful for the next 10 years.

The same happened with orchestrators. I remember spending months watching this space: who would win between Docker Swarm, Kubernetes, Nomad and Marathon? When Red Hat joined Google to develop Kubernetes, it was already a signal that it was a good bet. Then Google donated the project to the Linux Foundation and created the Cloud Native Computing Foundation. This is why Kubernetes started to skyrocket: it was then a standard supported by the Linux Foundation.

I really believe in standards; they are good for enabling mass adoption of technologies.

That's why we need to define upstream, in Kubernetes, what a PostgreSQL instance is. This would enable greater adoption of this functionality and allow more collaboration between the different implementations.

What is a KEP

The definition from upstream:

A Kubernetes Enhancement Proposal (KEP)
is a way to propose, communicate and coordinate
on new efforts for the Kubernetes project.

Think of it as an RFC in the Internet world.

Standard CRD KEP

So now you can understand why I'm so fascinated by this. And the good news is that I found a KEP that has already been open for many months. I then started to work on an implementation proposal.

You can find the presentation of the kubernetes enhancement proposal.

You can discuss the associated issue.

And here is my concrete proposal to solve this.

Next week objective

Misc

Toot of the week

An interesting discussion about energy consumption and self hosting

Thanks

If you made it this far, I'd like to thank you for your precious time.

If you could react on what you want more of, and what you want less of, it would help me make this more interesting for you.

If you found it interesting and you know someone else who might, please share it around.

Last thing: if you have a question, please ask it here, and next week I'll answer one.

Pierre Ozoux

Cambridge Analytica whistleblower

2 min read

Maybe you saw the revelation of the Facebook breach.

A lot of people are shocked, and for me it was also the last drop that made me quit Facebook.

But it is really nothing new. We accepted it, even if we didn't read the terms.

Surveillance capitalism is really bad for democracy, and since the Snowden revelations we know that these companies collaborate with state surveillance, which is nothing to reassure citizens of the world. And as you can see, even in our so-called "democracies" the leader can quickly change to somebody you didn't really expect to have all this data in their hands.

I'll make here a little list of articles proving that it is nothing new, articles that shocked me before:

Our digital twin maybe reveals more than what we actually know about ourselves. And they can manipulate our real person.

Is that not enough power? No, there is something even scarier. We are living in a panopticon, always knowing that somebody can watch us. You might think: who cares? I was really surprised to learn that traffic to Wikipedia entries about terrorism dropped after the Snowden revelations. So yes, we are now afraid to learn more about our world because we are under surveillance!

If you think like me that this is scary, and you have something to hide, then quit, and/or donate to the Terms of Service; Didn't Read crowdfunding campaign!

Pierre Ozoux

Having a bit of fun with Hetzner free cloud and kubernetes before GOT

1 min read

Following this tutorial from my GF's Ubuntu :)

apt-get install python-pip
pip install pssh
cat > servers <<EOF
# one server IP per line (the list was elided in the original)
EOF
cat > script.sh <<'EOS'
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/$(lsb_release -si | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) stable
EOF

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}') kubelet kubeadm kubectl
EOS

pssh -O StrictHostKeyChecking=no -h servers -i -t 0 -I < ./script.sh

My master will be the 22.

Run this on that node:


kubeadm init --pod-network-cidr=10.244.0.0/16

Run this on the workers:

kubeadm join ... # the join command you got from the init

And again on the master:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/canal.yaml

Et voilà :)

kubectl get no
kubectl get po --all-namespaces

OK, it took 1h30, and not the 15min expected at the beginning :/ So no GOT, just H2G2, and then off to bed :)

Pierre Ozoux


I start to understand the fuss around swarm mode...

1 min read

Ok, I think I'm annoyed (not sure, but almost).

They always said,

"develop against docker api, it will not change."

"Docker run" is just an api call to a server, you can replace the server with a cluster master"

And now, how do you deploy? "docker service create"...

Is all my work around docker-compose compatible? "Yes, we have a crappy and unstable converter!"

Then there is also this converter, compose2kube :) I'm sure it is just as good, and if I have to switch, I'll switch to the big guys :)

They also have kpm which looks interesting!

And ceph and git support for volumes \o/

In this battle, my position has always been to wait with my docker-compose (taking the least energy-consuming path). But @docker, if I have to work to migrate my existing compose files to whatever, I'll change to the winner!

Go Fork yourself Docker!

https://regmedia.co.uk/2012/06/18/torvalds_bird.jpg

Pierre Ozoux

Recommendations to read before a #docker training :)

2 min read

And for after, to go further:

In terms of podcasts, listen to:

  • PodCTL - start from the beginning, they'll teach you the basics :)
  • The Cloudcast - follow the trends on what is happening around the cloud world.

And finally, on YouTube, I recommend following these channels:

And about kubernetes, I recommend the following:

Julia Evans

And you, what would you recommend?

PS: do you know why we say k8s for Kubernetes, or i18n for internationalisation?

First letter, number of letters in between, last letter. You're welcome!

localisation -> l10n :)

Pierre Ozoux

12 Fractured Apps

1 min read

I laughed :)

"I can hear the silent cheers from hipster “sysadmins” sipping on a cup of Docker Kool-Aid eagerly waiting to suggest using a custom Docker entrypoint to solve our bootstrapping problems."

https://medium.com/@kelseyhightower/12-fractured-apps-1080c73d481c#.7nbvd5lme

Really good read about how you should build your applications!

Pierre Ozoux

GNU new round of investment!

1 min read

"They got embedded in all the huge enterprise companies on the backs of volunteers! Now they can flip on the revenue stream. I really respect Richard for his cutthroat business strategy."

-Larry Ellison, Oracle

https://diafygi.github.io/gnu-pricing/website/

Pierre Ozoux

Mounting an iPhone on Ubuntu

1 min read

If you ever need to mount an iPhone on Ubuntu, it is a pain; I hope this helps:

sudo apt-get install libfuse-dev build-essential automake libusbmuxd-dev libplist-dev libplist++-dev python-dev libssl-dev libtool

wget -O libimobiledevice-master.zip https://github.com/libimobiledevice/libimobiledevice/archive/master.zip
unzip libimobiledevice-master.zip
cd libimobiledevice-master/
./autogen.sh
make
sudo make install
cd ..

wget -O ifuse-master.zip https://github.com/libimobiledevice/ifuse/archive/master.zip
unzip ifuse-master.zip
cd ifuse-master/
./autogen.sh
make
sudo make install

sudo modprobe fuse
sudo adduser $USER fuse
mkdir /tmp/iphone/
ifuse /tmp/iphone/

Pierre Ozoux

Backups on Rancher/Convoy/GlusterFS

2 min read

The problem

I'm currently working on https://openintegrity.org and we use Rancher with Convoy and GlusterFS. So far so nice.

We now need to do backups, because, well, you know, it is always nice to have backups!

Backups have 2 purposes:

  • disaster recovery: a disk burns and I want to recover my data, or my Gluster cluster collapsed
  • going back in time: I just deleted really important data and I want to recover it.

Convoy offers snapshot features, but no rollback, so it is a bit useless for going back in time.

We could use the backup feature, but it would be quicker to restore from a snapshot. Anyway, they don't offer it, so we are out of luck.

And actually, convoy-glusterFS doesn't even implement the backup or snapshot options. So we are really out of luck here.

A possible solution

Make a generic process that runs periodically. It would list all the mount points used by local containers, then for each mount point:

  • create a container
  • mount this in read-only
  • use duplicity to back it up locally (incremental and encrypted)

And then, to keep these backups in a safe place:

  • expose this folder in read-only to ssh
  • pull backup from another server
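The per-mount-point loop above could be sketched as a shell function. To be clear, this is only a sketch of the idea: the `backup-image` name, the passphrase handling and the target paths are all assumptions of mine, not a tested setup.

```shell
#!/bin/bash
# Sketch of the proposed backup job. For each mount point used by a local
# container, start a throwaway container that mounts it read-only and runs
# duplicity against it. "backup-image" is a hypothetical image that has
# duplicity installed; paths and env handling are illustrative.
backup_local_mounts() {
  local backup_dir=$1   # host directory where the duplicity archives land
  for container in $(docker ps -q); do
    # One "source destination" pair per line for each of the container's mounts.
    docker inspect -f '{{range .Mounts}}{{.Source}} {{.Destination}}{{"\n"}}{{end}}' "$container" |
    while read -r src dst; do
      [ -z "$src" ] && continue
      name="$container$(echo "$dst" | tr / _)"
      # The data volume is mounted read-only; only the backup target is writable.
      docker run --rm \
        -v "$src":/data:ro \
        -v "$backup_dir":/backups \
        -e PASSPHRASE \
        backup-image \
        duplicity /data "file:///backups/$name"
    done
  done
}
```

The resulting `$backup_dir` is then what you would expose read-only over ssh so another server can pull it.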

What do you think?

A nice enhancement would be to detect if the folder is a MySQL data directory (/var/lib/mysql) and, if so, perform a mysqldump before doing the incremental backup.

We still have to write a restore procedure, but once I know I have my backups in duplicity format, I'm a lot more comfortable!