<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.5">Jekyll</generator><link href="https://blog.cloudowski.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://blog.cloudowski.com/" rel="alternate" type="text/html" /><updated>2022-08-11T10:12:42+02:00</updated><id>https://blog.cloudowski.com/feed.xml</id><subtitle>DevOps Growth blog</subtitle><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><entry><title type="html">What 2022 will bring for DevOps</title><link href="https://blog.cloudowski.com/pl/co-przyniesie-rok-2022-dla-devops/" rel="alternate" type="text/html" title="What 2022 will bring for DevOps" /><published>2021-12-30T00:00:00+01:00</published><updated>2021-12-30T00:00:00+01:00</updated><id>https://blog.cloudowski.com/pl/co-przyniesie-rok-2022-dla-devops</id><content type="html" xml:base="https://blog.cloudowski.com/pl/co-przyniesie-rok-2022-dla-devops/">&lt;p&gt;What a year it was! I have probably never watched so many Netflix series while sitting at home, but I have also never read so many books. And how many things I learned - a lot about technology, and even more about myself. So many conferences took place after being moved online - unfortunately, in my opinion, to their detriment. Three new versions of Kubernetes came out, Terraform 1.0 appeared, and there were plenty of interesting, large cloud outages. A lot happened, I must say. But the most important thing for me was starting to publish this newsletter regularly - that was an excellent decision!&lt;br /&gt;
This time I decided to pull out a crystal ball and play fortune teller :) Here are my predictions for 2022. I will focus on technology, DevOps, and my own plans.&lt;/p&gt;

&lt;h2 id=&quot;kubernetes-się-starzeje-dojrzewa&quot;&gt;Kubernetes &lt;del&gt;is aging&lt;/del&gt; is maturing&lt;/h2&gt;

&lt;p&gt;This is no longer news - Kubernetes is now the standard. The project is still developing rapidly, and its API is maturing. There are no spectacular novelties, and there will not be many in the versions released in 2022 either. Such is the fate of good products - they stop making headlines because they become part of our everyday lives. It was similar with Linux - once a group of enthusiasts started using it even on desktops, and today it is commonplace across the entire IT world.
One of the important trends is the further modularization of Kubernetes and moving individual components out of the core code. This mainly concerns networking (it is already happening - CNI plugins), but increasingly also volume handling (CSI drivers instead of in-tree providers). This should further accelerate the adoption of Kubernetes on smaller clouds and in on-prem environments.&lt;/p&gt;

&lt;h2 id=&quot;terraform-standardem&quot;&gt;Terraform as the standard&lt;/h2&gt;

&lt;p&gt;Managers and directors can finally relax while watching their IT departments use Terraform to manage cloud services. It is no longer some 0.x version, but 1.0, &lt;a href=&quot;https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-0-general-availability&quot;&gt;released&lt;/a&gt; last year. Some may smile condescendingly, but even such small details can sometimes tip the scales when choosing a tool. Overall it was a great year for HashiCorp, which has &lt;a href=&quot;https://www.hashicorp.com/blog/a-new-chapter-for-hashicorp&quot;&gt;recently&lt;/a&gt; become a public company listed on the New York Stock Exchange. 
Terraform itself will remain an ever more widely used tool among DevOps people. As with Kubernetes, its adoption is helped by modularization. You can extend it by writing your own &lt;a href=&quot;https://www.terraform.io/plugin/framework/providers&quot;&gt;providers&lt;/a&gt;, host them in &lt;a href=&quot;https://www.terraform.io/language/modules/sources&quot;&gt;your own locations&lt;/a&gt;, and on top of that use plenty of ready-made &lt;a href=&quot;https://registry.terraform.io/browse/modules&quot;&gt;modules&lt;/a&gt; or write your own. I predict further growth in Terraform usage and a larger number of providers. I cannot imagine a serious environment that in 2022 is clicked together in a GUI instead of being defined in code.&lt;/p&gt;
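&lt;p&gt;As a minimal sketch of what this module-based reuse looks like - the module source, its inputs, and its output name below are hypothetical, not a real registry module:&lt;/p&gt;

```hcl
# Illustrative Terraform 1.x configuration - the module source and its
# variables are made up for the example.
terraform {
  required_version = ">= 1.0"
}

# Reuse a module published in a registry or kept in your own repository.
module "network" {
  source  = "app.terraform.io/example-org/network/aws" # hypothetical source
  version = "~> 2.0"

  cidr_block  = "10.0.0.0/16"
  environment = "staging"
}

# Expose a value produced by the module (assumed output name).
output "network_id" {
  value = module.network.vpc_id
}
```

&lt;p&gt;The same pattern scales from a single VPC to entire platforms: teams publish versioned modules once and consume them everywhere.&lt;/p&gt;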

&lt;h2 id=&quot;everything-as-code&quot;&gt;Everything as Code&lt;/h2&gt;

&lt;p&gt;Following Kubernetes and Terraform, it is easy to predict where we are heading - the concept of Everything as Code. Not just Infrastructure as Code; everything will be described with code. This will translate into a need for even more automation. There will be even more &lt;a href=&quot;https://artifacthub.io/packages/search?kind=3&amp;amp;sort=relevance&amp;amp;page=1&quot;&gt;operators&lt;/a&gt; for Kubernetes-based platforms, more and more services from public cloud providers, more Terraform providers to manage them, and applications will be delivered entirely through CI/CD pipelines. More and more infrastructure knowledge and practice will be hidden inside software operated through APIs. Go will reign here as the language that integrates most easily with these platforms (thanks to, among other things, native clients for Kubernetes, Terraform, various cloud platforms, and serverless) and works great in containers (static binaries and single-file container images).
To a large extent this is already happening in modern organizations, and the rest will simply have to catch up.&lt;/p&gt;

&lt;h2 id=&quot;bezpieczeństwo-priorytetem&quot;&gt;Security as a priority&lt;/h2&gt;

&lt;p&gt;Probably few serious organizations jump into new technologies without analyzing their impact on security. Perhaps that is why so many of them fear change and the introduction of containers or Kubernetes. They believe it is better to stick with something familiar. That is still understandable if the technology in question is properly patched, but more and more often software vendors demand wheelbarrows of money to keep such products alive.
Secure platforms can be built on virtual machines and on public or private clouds, but increasingly it is Kubernetes that runs containerized applications. And regardless of the choice, security always remains an issue. In 2022 the term DevSecOps will be heard even more often - embedding security controls at the earliest possible stage (i.e. &lt;a href=&quot;https://www.aquasec.com/cloud-native-academy/devsecops/shift-left-devops/&quot;&gt;Shift Left&lt;/a&gt;). I cannot imagine security being a process described in documents rather than in code. Tools such as &lt;a href=&quot;https://github.com/open-policy-agent/gatekeeper&quot;&gt;OPA&lt;/a&gt;, &lt;a href=&quot;https://kyverno.io/&quot;&gt;Kyverno&lt;/a&gt; and &lt;a href=&quot;https://www.vaultproject.io/&quot;&gt;Vault&lt;/a&gt; will be needed more than ever.&lt;/p&gt;
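&lt;p&gt;To show what security described in code can look like, here is a minimal Kyverno-style validation policy, sketched after Kyverno's documented examples - the policy name and label convention are illustrative:&lt;/p&gt;

```yaml
# Sketch of a Kyverno ClusterPolicy that rejects Pods missing a `team` label.
# Names and the label convention are illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: enforce   # block non-compliant resources
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label `team` is required on every Pod."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```

&lt;p&gt;A policy like this lives in Git next to the rest of the platform configuration, so the security rule itself is reviewed, versioned, and deployed like any other code.&lt;/p&gt;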

&lt;h2 id=&quot;praca-zdalna-lub-przynajmniej-hybrydowa&quot;&gt;Remote work (or at least hybrid)&lt;/h2&gt;

&lt;p&gt;The last two years have shown that remote work is possible and does not hurt an organization's effectiveness. That is great news for those who spend many hours commuting or live far from the bustle of cities. They will now be able to choose employers more freely and join interesting projects without having to move to another city or country. This has already caused quite a stir in the job market, opening up opportunities for DevOps engineers and beyond. 
For those like me who sometimes need face-to-face contact, the hybrid model will remain. I personally believe we need it for better communication, even if it means meeting once a week or only once a month. My experience tells me that I communicate better in person - I read nonverbal signals and facial expressions more clearly, which helps me understand the other side better.
On the other hand, I think the silver lining of this pandemic misfortune is that COVID arrived at a time when technology can reduce its impact on our lives.&lt;/p&gt;

&lt;h2 id=&quot;większe-zapotrzebowanie-na-devops&quot;&gt;Greater demand for DevOps&lt;/h2&gt;

&lt;p&gt;This is also apropos of remote work and the pandemic. It turned out that those prepared for remote work - not only companies but also public offices - are coping better. There are, however, countries and organizations where progress was not fast enough. The pandemic exposed gaps in the readiness of offices and institutions, which now have to make up for these digital transformation shortfalls quickly. And even when ready-made solutions exist - software already developed and used by similar institutions - it all still has to be wired together.&lt;br /&gt;
And here we return to the role of DevOps in all of this. It is thanks to skilled engineers able to make use of the cloud, containers, all kinds of APIs, and traditional infrastructure (the cloud is often simply out of reach) that such an accelerated transformation has a chance of success.
So chin up (or into the books and education), hands on the keyboard - time to roll up our sleeves and bring everyone else into the 21st century!&lt;/p&gt;

&lt;h2 id=&quot;cloudowski-urośnie&quot;&gt;Cloudowski will grow&lt;/h2&gt;

&lt;p&gt;And not only because of the excessive amount of cheesecake I devoured over the recent holidays! I am talking about the ambitious plans I have set for myself for 2022. I cannot reveal all of them, but I will lift the veil a little. Here is what is coming:&lt;/p&gt;

&lt;p&gt;🎙 I will start publishing my own podcast about DevOps - the first episodes are coming soon!
📸 I will show what goes on behind the scenes of my work - I have created an Instagram account!
📰 I will create more content in Polish - more articles and more interesting videos on YouTube
📚 I will record and run further workshops on DevOps, Kubernetes, Terraform and Cloud
⎈ I will release a refreshed version of my course “Kubernetes po polsku”
💻 I will run traditional trainings to help others start using DevOps tools and processes more effectively (the first few months are already fully booked)&lt;/p&gt;

&lt;p&gt;That is only a slice of what I have planned. It is shaping up to be a very busy year.&lt;/p&gt;

&lt;p&gt;And do you have your own plans written down yet? From my experience, only plans written down by hand have a real chance of being realized. It is best to carry the list with you or hang it somewhere visible. Nothing works as well as focusing on the things that matter to you.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="devops" /><category term="terraform" /><category term="kubernetes" /><category term="devsecops" /></entry><entry><title type="html">Kubernetes for mere mortals</title><link href="https://blog.cloudowski.com/articles/kubernetes-for-mere-mortals/" rel="alternate" type="text/html" title="Kubernetes for mere mortals" /><published>2021-08-25T00:00:00+02:00</published><updated>2021-08-25T00:00:00+02:00</updated><id>https://blog.cloudowski.com/articles/kubernetes-for-mere-mortals</id><content type="html" xml:base="https://blog.cloudowski.com/articles/kubernetes-for-mere-mortals/">&lt;p&gt;There have been few technologies that have changed the landscape of business and impacted all our daily lives. Of course, the internet is the technology that had the biggest impact, but there are a few more that also influenced various fields of business, especially IT. One of these technologies is Kubernetes, which has changed the way we build modern environments and create the software that runs on them. Although Kubernetes turns &lt;a href=&quot;https://en.wikipedia.org/wiki/Kubernetes#History&quot;&gt;6 years&lt;/a&gt; old this year, most articles and blog posts about it still focus on technological features.&lt;br /&gt;
This time I want to help non-technical people understand how Kubernetes might affect their business. Many organizations have already embraced this new Cloud Native approach, have been using it to speed up innovation, and have probably already found their own answers to the questions below. I believe that using containers and Kubernetes has a big impact not only on the technical side of organizations but also on their culture, by enabling people to deliver their software faster, more efficiently and more securely.&lt;/p&gt;

&lt;h2 id=&quot;1-what-is-kubernetes-and-how-does-it-work&quot;&gt;1. What is Kubernetes and how does it work?&lt;/h2&gt;

&lt;p&gt;It all started with containers that are used to create packages with software and everything that is required to run a particular application. These containers are like a new type of robot that can be easily replicated and built, replaced quickly when they break or misbehave, and are one-purpose entities built for a dedicated task. They are different from virtual machines that are more like an old type of robot that is multi-purpose and thus heavier, harder to build, and require lots of time-consuming maintenance.&lt;br /&gt;
Containers work best on a platform that can host multiple instances of them and provide additional services. Kubernetes is an open source project which is the best platform for these containers and has outclassed the alternative solutions (i.e. Docker Swarm, &lt;a href=&quot;https://lists.apache.org/thread.html/rab2a820507f7c846e54a847398ab20f47698ec5bce0c8e182bfe51ba%40%3Cdev.mesos.apache.org%3E&quot;&gt;Apache Mesos&lt;/a&gt;, HashiCorp Nomad). It’s like a special hotel for these robots where they get to communicate with each other and the outside world, store and use the data they need to operate, and are provided with special care from the hotel staff. The key point here is that everything is taken care of by Kubernetes, which is like a hotel manager. The main task of a Kubernetes user is to issue proper requests in the form of declarative statements. These requests are standardized and every Kubernetes cluster has a catalog of available requests that it handles. And this is where the main strength of Kubernetes lies - this catalog can be easily extended with custom actions. So in terms of this virtual hotel, it’s like adding additional amenities to provide better services for the hotel’s customers.&lt;br /&gt;
So yes - Kubernetes is like a highly automated and standardized hotel for your applications and there are many other interesting aspects due to which it has become so popular.&lt;/p&gt;
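&lt;p&gt;In practice, such a declarative "request" is a plain manifest handed to the cluster. A minimal sketch - the application name and container image below are hypothetical:&lt;/p&gt;

```yaml
# A declarative request to Kubernetes: keep three replicas of this app running.
# The name and image are illustrative, not a real application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: ghcr.io/example/hello:1.0
          ports:
            - containerPort: 8080
```

&lt;p&gt;Submitted with &lt;code&gt;kubectl apply -f deployment.yaml&lt;/code&gt;, this tells the "hotel manager" the desired state, and Kubernetes continuously reconciles the real state of the cluster with it.&lt;/p&gt;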

&lt;h2 id=&quot;2-what-are-the-real-benefits-of-using-kubernetes&quot;&gt;2. What are the real benefits of using Kubernetes?&lt;/h2&gt;

&lt;p&gt;Kubernetes brings unification and sets standards for organizations that develop software and deploy it in the cloud or on-premises. Using Kubernetes simplifies the deployment process and, more importantly, speeds it up significantly. This allows organizations to deliver new features or even new services much faster than with the cloud alone. With this unified approach it is easy to use multiple cloud providers and to create hybrid solutions. It helps to avoid vendor lock-in as well. 
From the operational point of view, Kubernetes brings even more to the table - it increases reliability and allows you to scale your environments easily and quickly.&lt;br /&gt;
So to sum it up here’s the list of benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;broad unification&lt;/strong&gt; - the same deployment approach for multiple types of workloads&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;real portability&lt;/strong&gt; - run applications on the desktop as well as on multiple cloud platforms or in on-prem environments using the same tools&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;rapid scalability&lt;/strong&gt; - grow your environments quickly to make your platform responsive at all times&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;increased reliability&lt;/strong&gt; - leverage the self-healing features of the applications and the infrastructure they run on to provide constant access to your products for your customers&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;accelerated growth&lt;/strong&gt; - innovate faster, deliver new features and fixes to stay competitive in the ever-growing global market&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;3-will-people-in-my-organization-know-how-to-use-it&quot;&gt;3. Will people in my organization know how to use it?&lt;/h2&gt;

&lt;p&gt;You may be surprised to learn how many people in your organization use containers, maybe even Kubernetes, or at least know it. It’s been a hot topic over the last few years and it’s been a headliner at every major IT conference. The people responsible for research and development have known it for years and there’s a chance that even some proof-of-concept projects have already been initialized in your organization. It’s just hard to miss this popular trend and it’s unwise to ignore it.&lt;br /&gt;
Software vendors have also noticed and fully embraced Kubernetes as a core platform for their products. They deliver them in the form of container images and additional configurations that allow running the software almost immediately on any Kubernetes environment. This means that sooner or later your organization will need to embrace Kubernetes as well to keep up with the inevitable changes enforced either internally or externally.&lt;/p&gt;

&lt;h2 id=&quot;4-can-i-just-wait-for-something-better&quot;&gt;4. Can I just wait for something better?&lt;/h2&gt;

&lt;p&gt;If this were 2016 or even 2017, there could be some doubt as to whether Kubernetes is a solution worth investing time and resources in. Since then, however, Kubernetes has gained a dominant position and is the de facto standard for all modern environments.&lt;br /&gt;
For those using cloud services, some questions may remain open - especially if the environments built on top of the public cloud have been designed properly. For the rest, running on on-prem hardware, there is no time to wait, as the list of benefits that a platform built on Kubernetes brings is just too tempting to ignore. Personally, I think there is no better way these days to build a reliable, fast and scalable platform on your own hardware.&lt;/p&gt;

&lt;h2 id=&quot;5-is-kubernetes-secure&quot;&gt;5. Is Kubernetes secure?&lt;/h2&gt;

&lt;p&gt;The simplest answer is: it surely can be more secure than other systems. Kubernetes is another software project that has had flaws and security vulnerabilities and probably more of them will be discovered in the future. The emergence of these flaws is caused mostly by Kubernetes’ complexity and the fact that it’s a universal platform that includes features for a broad number of use cases.&lt;br /&gt;
There’s another factor that might increase the overall security of a platform based on Kubernetes - it’s the amount of time it takes to fix the vulnerabilities found in Kubernetes as well as in containers running on it. Everything in Kubernetes is based on containers that are very easy to fix by replacing them almost seamlessly without too much effort. Containers are also smaller and are built for one purpose, which makes them less vulnerable to various attacks.&lt;br /&gt;
It’s not about how secure and free of vulnerabilities the platform components are - it’s about how fast they can be fixed, and Kubernetes makes fixing them faster and easier than ever.&lt;/p&gt;

&lt;h2 id=&quot;6-can-i-use-kubernetes-with-my-hardware-or-just-in-the-cloud&quot;&gt;6. Can I use Kubernetes with my hardware or just in the cloud?&lt;/h2&gt;

&lt;p&gt;It’s definitely easier to use Kubernetes in the cloud since it often takes a few minutes to create a basic cluster that is ready to use. However, using it for on-prem environments is an excellent idea for the following reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It is cheap to build a cluster for bigger projects (i.e. requiring a lot of resources and servers)&lt;/li&gt;
  &lt;li&gt;It allows utilization of existing hardware, even if it is not necessarily high-class or enterprise-grade, as Kubernetes can mitigate potential failures quite well&lt;/li&gt;
  &lt;li&gt;With a unified API it’s also the easiest way to create hybrid solutions (i.e. multi-cloud, multi-region, multi-datacenter)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are additional questions that need to be answered. First is whether to build or buy - in &lt;a href=&quot;/articles/which-kubernetes-for-on-prem/&quot;&gt;this&lt;/a&gt; article I’ve given my observations on this topic. If building is the preferred choice then &lt;a href=&quot;/articles/a-recipe-for-on-prem-kubernetes/&quot;&gt;this&lt;/a&gt; is a list of things that should be considered during the process.&lt;/p&gt;

&lt;h2 id=&quot;7-how-is-it-different-from-cloud&quot;&gt;7. How is it different from cloud?&lt;/h2&gt;

&lt;p&gt;Public cloud platforms can really help boost innovation, in some cases lower the TCO, and make it easier to leverage new technologies (e.g. Artificial Intelligence), but many organizations struggle to implement efficient processes for the software they develop. Kubernetes uses public cloud infrastructure to provide environments dedicated to applications, and it enforces a new way of developing and deploying them. This set of new practices is often referred to as Cloud Native. It is doable without Kubernetes, but it’s much simpler with it. 
Kubernetes can itself be treated as a cloud platform, and it can be especially useful for on-prem environments where the process of building a private cloud didn’t bring the expected benefits (i.e. mostly decreasing the lead time for applications).&lt;/p&gt;

&lt;h2 id=&quot;8-how-can-i-start-using-kubernetes-in-my-organization&quot;&gt;8. How can I start using Kubernetes in my organization?&lt;/h2&gt;

&lt;p&gt;Kubernetes runs containers, so you need an application delivered as a container image. You can start with a new project or pick one that was developed fairly recently and uses modern frameworks (e.g. Java with Spring Boot, Node.js, Go, Python).&lt;/p&gt;

&lt;p&gt;Then just start deploying it on a single-node cluster running on a laptop, such as &lt;a href=&quot;https://minikube.sigs.k8s.io/docs/start/&quot;&gt;Minikube&lt;/a&gt; or &lt;a href=&quot;https://microk8s.io/&quot;&gt;Microk8s&lt;/a&gt;. You don’t need any cloud or any big cluster. The beauty of Kubernetes lies in its portability - if it works on your workstation it will also work on &lt;strong&gt;any&lt;/strong&gt; other Kubernetes.&lt;/p&gt;

&lt;p&gt;When your application is ready, you need to choose your production environment. If you are able to use the public cloud, choose the provider you have the most experience with. If an on-premises environment is all you can use, then your journey is probably just starting - have a look at my comparison between &lt;a href=&quot;/articles/10-differences-between-openshift-and-kubernetes/&quot;&gt;Kubernetes and OpenShift&lt;/a&gt;, as the latter is the dominant product for such environments. If you wish to build your own platform using open source components, have a look at my &lt;a href=&quot;/articles/a-recipe-for-on-prem-kubernetes/&quot;&gt;tips on building&lt;/a&gt; a bespoke Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;At the same time, start educating people in your organization about the benefits of running your software on a Kubernetes platform. Like every major change, this won’t happen quickly, and you can expect resistance from people who believe it’s a superfluous technology. Your job is to create a &lt;em&gt;Cloud Native Center of Excellence&lt;/em&gt; and start dispelling the doubts arising around the topic of containers, Kubernetes and the new approach towards development and deployment. Adoption of Kubernetes &lt;a href=&quot;https://www.redhat.com/en/resources/kubernetes-adoption-security-market-trends-2021-overview&quot;&gt;has grown&lt;/a&gt; significantly over the years, and it is currently one of the best options for providing a highly-available platform and delivering your applications faster.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><summary type="html">There have been few technologies that have changed the landscape of business and impacted all our daily lives. Of course, the internet is the technology that had the biggest impact, but there are a few more that also influenced various fields of business, especially that of IT. One of these technologies is Kubernetes which has changed the way we build modern environments and create software that runs on them. Although Kubernetes turns 6 years old this year, you can find articles and blog posts focused mostly on technological features. This time I want to help you to understand how Kubernetes might affect the business for non-technical people. Many organizations have already embraced this new Cloud Native approach and have been using it to speed up innovation and have probably already found their own answers to the questions below. 
I believe that using containers and Kubernetes has a big impact on not only the technical part of organizations but also their culture by enabling people to deliver their software faster, more efficiently and securely. 1. What is Kubernetes and how does it work? It all started with containers that are used to create packages with software and everything that is required to run a particular application. These containers are like a new type of robot that can be easily replicated and built, replaced quickly when they break or misbehave, and are one-purpose entities built for a dedicated task. They are different from virtual machines that are more like an old type of robot that is multi-purpose and thus heavier, harder to build, and require lots of time-consuming maintenance. Containers work best on a platform that can host multiple instances of them and provide additional services. Kubernetes is an open source project which is the best platform for these containers and has outclassed the alternative solutions (i.e. Docker Swarm, Apache Mesos, HashiCorp Nomad). It’s like a special hotel for these robots where they get to communicate with each other and the outside world, store and use the data they need to operate, and are provided with special care from the hotel staff. The key point here is that everything is taken care of by Kubernetes, which is like a hotel manager. The main task of a Kubernetes user is to issue proper requests in the form of declarative statements. These requests are standardized and every Kubernetes cluster has a catalog of available requests that it handles. And this is where the main strength of Kubernetes lies - this catalog can be easily extended with custom actions. So in terms of this virtual hotel, it’s like adding additional amenities to provide better services for the hotel’s customers. 
So yes - Kubernetes is like a highly automated and standardized hotel for your applications, and there are many other interesting aspects due to which it has become so popular. 2. What are the real benefits of using Kubernetes? Kubernetes brings unification and sets standards for organizations that develop software and deploy it on cloud or on-premises. Using Kubernetes simplifies the deployment process and, what’s more important, it speeds up the process significantly. This allows organizations to provide new features or even new services much more quickly than before. With this unified approach it is easy to use multiple cloud providers and also create hybrid solutions. It helps to avoid vendor lock-in as well. From the operational point of view, Kubernetes brings even more to the table - it increases reliability and allows you to scale your environments easily and quickly. So to sum it up, here’s the list of benefits: broad unification - the same deployment approach for multiple types of workloads real portability - run applications on the desktop as well as on multiple cloud platforms or on-prem environments using the same tools rapid scalability - grow your environments quickly to make your platform responsive at all times increased reliability - leverage the self-healing feature of the applications and the infrastructure they run on to provide constant access to your products for your customers accelerated growth - innovate faster, deliver new features and fixes to stay competitive in the ever-growing global market 3. Will people in my organization know how to use it? You may be surprised to learn how many people in your organization use containers, maybe even Kubernetes, or at least know of it. It’s been a hot topic over the last few years and it’s been a headliner at every major IT conference. 
The people responsible for research and development have known it for years and there’s a chance that even some proof-of-concept projects have already been initialized in your organization. It’s just hard to miss this popular trend and it’s unwise to ignore it. Software vendors have also noticed and fully embraced Kubernetes as a core platform for their products. They deliver them in the form of container images and additional configurations that allow running the software almost immediately on any Kubernetes environment. This means that sooner or later your organization will need to embrace Kubernetes as well to keep up with the inevitable changes enforced either internally or externally. 4. Can I just wait for something better? If it was 2016 or even 2017 then there could be some doubt as to whether Kubernetes is the solution worth investing time and resources in. However, since then Kubernetes has gained a dominant position and it’s the de facto standard for all modern environments. For those using cloud services, this might still pose some questions, especially if the environments built on top of the public cloud have been designed properly. For the rest using on-prem hardware, there’s no time to wait, as the list of benefits that a platform built on Kubernetes brings is just too tempting to ignore. Personally I think there’s no better way these days to build a platform that is reliable, fast and scalable on your own hardware. 5. Is Kubernetes secure? The simplest answer is: it surely can be more secure than other systems. Kubernetes is another software project that has had flaws and security vulnerabilities and probably more of them will be discovered in the future. The emergence of these flaws is caused mostly by Kubernetes’ complexity and the fact that it’s a universal platform that includes features for a broad number of use cases. 
There’s another factor that might increase the overall security of a platform based on Kubernetes - the amount of time it takes to fix the vulnerabilities found in Kubernetes as well as in the containers running on it. Everything in Kubernetes is based on containers, which are very easy to fix by replacing them almost seamlessly without too much effort. Containers are also smaller and built for one purpose, which makes them less vulnerable to various attacks. It’s not about how secure and free of vulnerabilities the platform components are - it’s more about how fast they can be fixed, and Kubernetes makes it as fast and easy as ever. 6. Can I use Kubernetes with my hardware or just in the cloud? It’s definitely easier to use Kubernetes in the cloud since it often takes a few minutes to create a basic cluster that is ready to use. However, using it for on-prem environments is an excellent idea for the following reasons: It is cheap to build a cluster for bigger projects (i.e. requiring a lot of resources and servers) It allows utilization of existing hardware, even if not necessarily high-class or enterprise level, as Kubernetes can mitigate potential failures quite well With a unified API it’s also the easiest way to create hybrid solutions (i.e. multi-cloud, multi-region, multi-datacenter) There are additional questions that need to be answered. The first is whether to build or buy - in this article I’ve given my observations on this topic. If building is the preferred choice then this is a list of things that should be considered during the process. 7. How is it different from cloud? Public cloud platforms can really help boost innovation, in some cases lower the TCO, and leverage new technologies (e.g. Artificial Intelligence), but many organizations struggle to implement efficient processes for the software they develop. 
Kubernetes uses public cloud infrastructure to provide environments dedicated to applications, and it enforces a new way of developing and deploying applications. This set of new practices is often referred to as Cloud Native. It is doable without Kubernetes, but it’s much simpler with it. Kubernetes can be treated as a cloud platform, and it can be especially useful for on-prem environments where the process of building a private cloud didn’t bring the expected benefits (i.e. mostly decreasing the lead time for applications). 8. How can I start using Kubernetes in my organization? Kubernetes uses containers, so you need to have some application delivered inside a container image. You can start with a new project or pick one that was fairly recently developed and uses modern frameworks (e.g. java with spring boot, nodejs, golang, python). Then just start deploying it on a single-node cluster running on a laptop, such as Minikube or Microk8s. You don’t need any cloud or any big cluster. The beauty of Kubernetes lies in its portability - if it works on your workstation it will also work on any other Kubernetes. When your application is ready, you need to choose your production environment. If you are able to use the public cloud, choose the provider you have the most experience with. If an on-premises environment is all you can use, then your journey is probably just beginning - have a look at my comparison between Kubernetes and OpenShift, as the latter is the dominant product for such environments. If you wish to build your own platform using open source components, have a look at my tips on building a bespoke Kubernetes cluster. At the same time, start educating people in your organization about the benefits of using a Kubernetes platform for running your software. Like every major change, this won’t happen quickly, and you can expect resistance from people who believe it’s a superfluous technology. 
Your job is to create a Cloud Native Center of Excellence and start dispelling the doubts arising around the topic of containers, Kubernetes and a new approach towards development and deployment. Meanwhile, adoption of Kubernetes has grown significantly over the years, and it is currently one of the best options for providing a highly available platform and delivering your applications faster.</summary></entry><entry><title type="html">A recipe for a bespoke on-prem Kubernetes cluster</title><link href="https://blog.cloudowski.com/articles/a-recipe-for-on-prem-kubernetes/" rel="alternate" type="text/html" title="A recipe for a bespoke on-prem Kubernetes cluster" /><published>2021-04-08T00:00:00+02:00</published><updated>2021-04-08T00:00:00+02:00</updated><id>https://blog.cloudowski.com/articles/a-recipe-for-on-prem-kubernetes</id><content type="html" xml:base="https://blog.cloudowski.com/articles/a-recipe-for-on-prem-kubernetes/">&lt;p&gt;So you want to build yourself a Kubernetes cluster? You have your reasons. Some may want to utilize the hardware they own, some may not fully trust these fancy cloud services or simply want to have a choice and build themselves a hybrid solution.&lt;br /&gt;
There are a couple of products available that I’ve &lt;a href=&quot;/articles/which-kubernetes-for-on-prem/&quot;&gt;reviewed&lt;/a&gt;, but you’ve decided to build a platform from scratch. And again, there are a myriad of reasons why it might be a good idea, and also many that would convince you it’s not worth your precious time. In this article, I will focus on providing a list of things to consider when starting a project to build a Kubernetes-based platform using only the most popular open source components.&lt;/p&gt;

&lt;h2 id=&quot;target-groups&quot;&gt;Target groups&lt;/h2&gt;

&lt;p&gt;Before we jump into the technicalities, I want to describe the three target groups that are referred to in the sections below.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Startups &lt;strong&gt;(SUP)&lt;/strong&gt; - very small companies or the ones with basic needs; their focus is on using basic Kubernetes API and facilitating services around it&lt;/li&gt;
  &lt;li&gt;Medium businesses &lt;strong&gt;(MBU)&lt;/strong&gt; - medium companies which want to leverage Kubernetes to boost their growth and innovation; their focus is on building a scalable platform that is also easy to maintain and extend&lt;/li&gt;
  &lt;li&gt;Enterprises &lt;strong&gt;(ENT)&lt;/strong&gt; - big companies with even bigger needs, scale, many policies, and regulations; they are the most demanding and are focused on repeatability, security, and scalability (in terms of the growing number of developers and teams working on their platform)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All these groups have different needs and thus they should build their platform in a slightly different way with different solutions applied to particular areas. I will refer to them using their abbreviations or as ALL when referring to all of them.&lt;/p&gt;

&lt;h2 id=&quot;installation&quot;&gt;Installation&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; To have a robust and automated way of managing your cluster(s)&lt;/p&gt;

&lt;p&gt;When deciding on installing Kubernetes without using any available distribution you have a fairly limited choice of installers.&lt;br /&gt;
You can try using &lt;a href=&quot;https://github.com/kubernetes/kubeadm&quot;&gt;kubeadm&lt;/a&gt; directly or use the more generic &lt;a href=&quot;https://github.com/kubernetes-sigs/kubespray&quot;&gt;kubespray&lt;/a&gt;. The latter will help you not only install but also maintain your cluster (upgrades, node replacement, cluster configuration management).&lt;br /&gt;
Both of these are universal and are unaware of how cluster nodes are provisioned. If you wish to use an automated solution that would also handle provisioning cluster nodes then &lt;a href=&quot;http://metal3.io/&quot;&gt;Metal3&lt;/a&gt; could be something you might want to try. It’s still in the alpha stage, but it looks promising.&lt;/p&gt;
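&lt;p&gt;To illustrate, a kubeadm-based installation is driven by a configuration file passed to kubeadm init. The sketch below shows the general shape only - the version, endpoint, and subnet are placeholders to adjust:&lt;/p&gt;

```yaml
# kubeadm-config.yaml - a minimal sketch; all values are example placeholders
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
# a load-balanced address in front of the control plane nodes
controlPlaneEndpoint: "k8s-api.example.local:6443"
networking:
  # must match the pod CIDR expected by the chosen CNI plugin
  podSubnet: "10.244.0.0/16"
```

&lt;p&gt;Running kubeadm init with this file bootstraps the first control plane node; the remaining nodes then join with tokens it prints out.&lt;/p&gt;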

&lt;p&gt;If you want a better and more cloud-native way of managing your clusters that would enable easy scaling, then you may want to try the &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api&quot;&gt;ClusterAPI&lt;/a&gt; project. It supports multiple cloud providers, but it can also be used in on-prem environments with the aforementioned Metal3, vSphere, or OpenStack.&lt;/p&gt;

&lt;p&gt;One more thing worth noting here: the operating system used by cluster nodes. Since the future of CentOS seems unclear, Ubuntu is becoming the main building block for bespoke Kubernetes clusters. Some may want to choose a slim alternative that has replaced CoreOS - &lt;a href=&quot;https://kinvolk.io/flatcar-container-linux/&quot;&gt;Flatcar Linux&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;cluster-autoscaler&quot;&gt;Cluster autoscaler&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Highly recommended for  &lt;em&gt;ENT&lt;/em&gt;, optional for others&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Automatically scale your platform up and down&lt;/p&gt;

&lt;p&gt;If you choose ClusterAPI, or your cluster otherwise uses some API to manage cluster nodes (e.g. vSphere, OpenStack, etc.), then you should also use the &lt;a href=&quot;https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler&quot;&gt;cluster autoscaler&lt;/a&gt; component. It is almost a mandatory feature for ENT, but it can also be useful for MBU organizations. By forcing nodes to be ephemeral entities that can be easily replaced/removed/added, you decrease the maintenance costs.&lt;/p&gt;
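&lt;p&gt;With ClusterAPI, the autoscaler discovers scalable node groups through annotations on MachineDeployment objects. A sketch - the annotation names come from the autoscaler’s ClusterAPI provider, and the sizes are examples:&lt;/p&gt;

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: workers
  annotations:
    # lets the cluster autoscaler manage this node group within the given bounds
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "3"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
```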

&lt;h2 id=&quot;network-cni-plugin&quot;&gt;Network CNI plugin&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Connect containers with optional additional features such as encryption&lt;/p&gt;

&lt;p&gt;The networking plugin is one of the decisions that need to be taken prudently, as it cannot be easily changed afterward.&lt;br /&gt;
To make things brief, I would shorten the list to two plugins - &lt;a href=&quot;https://www.projectcalico.org/&quot;&gt;Calico&lt;/a&gt; or &lt;a href=&quot;https://cilium.io/&quot;&gt;Cilium&lt;/a&gt;. Calico is older and maybe a little bit more mature, but Cilium looks very promising and utilizes Linux kernel BPF. For a more detailed comparison I would suggest reading &lt;a href=&quot;https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-august-2020-6e1b757b9e49&quot;&gt;this review&lt;/a&gt; of multiple plugins.&lt;br /&gt;
Choose wisely and avoid a CNI without NetworkPolicy support - having a Kubernetes cluster without the possibility to implement firewall rules is a bad idea. Both Calico and Cilium support encryption, which is a nice thing to have, but Cilium is able to encrypt all the traffic (Calico encrypts only pod-to-pod).&lt;/p&gt;
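&lt;p&gt;To show why NetworkPolicy support matters, here is the canonical default-deny rule that both Calico and Cilium can enforce - once applied, only explicitly allowed inbound traffic reaches the pods of that namespace (the namespace name is an example):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app
spec:
  podSelector: {}      # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules are listed, so all inbound traffic is denied
```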

&lt;h2 id=&quot;ingress-controller&quot;&gt;Ingress controller&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide an easy and flexible way to expose web applications with optional advanced features&lt;/p&gt;

&lt;p&gt;Ingress is a component that can be easily swapped out when the cluster is running. Actually, you can have multiple Ingress controllers by leveraging &lt;a href=&quot;https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/&quot;&gt;IngressClass&lt;/a&gt; introduced in Kubernetes 1.18.&lt;br /&gt;
A comprehensive comparison can be found &lt;a href=&quot;https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit#gid=907731238&quot;&gt;here&lt;/a&gt;, but I would limit it to a select few controllers depending on your needs.&lt;/p&gt;

&lt;p&gt;For those looking for compatibility with other Kubernetes clusters (e.g. a hybrid solution), I would suggest starting with the most mature and battle-tested controller - the &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/&quot;&gt;nginx ingress controller&lt;/a&gt;. The reason is simple - you need only the basic features described in the Ingress API that have to be implemented by every Ingress controller. That should cover 90% of cases, especially for the SUP group.&lt;/p&gt;
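&lt;p&gt;Those basic features boil down to host- and path-based routing, which every controller must implement - for example (hostname and service name are placeholders):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
spec:
  ingressClassName: nginx      # matches the IngressClass of the deployed controller
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp   # an existing Service in the same namespace
                port:
                  number: 80
```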

&lt;p&gt;If more features are required (such as sophisticated http routing, authentication, authorization, etc.) then the following options are the most promising:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://projectcontour.io/&quot;&gt;Contour&lt;/a&gt; - it’s the only CNCF project that is in the Incubating maturity level group. And it’s based on Envoy which is the most flexible proxy available out there.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/datawire/ambassador&quot;&gt;Ambassador&lt;/a&gt; - has nice features, but many of them are available only in the paid version. And yes - it also uses Envoy.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/haproxytech/kubernetes-ingress&quot;&gt;HAproxy&lt;/a&gt; from HAproxytech - for those who are familiar with HAproxy and want to leverage it to provide a robust Ingress controller&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://doc.traefik.io/traefik/&quot;&gt;Traefik&lt;/a&gt; - they have an awesome logo and if you’ve been using it for some Docker load-balancing then you may find it really useful for Ingress as well&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;monitoring&quot;&gt;Monitoring&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt; (unless an existing monitoring solution compatible with Kubernetes exists)&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide insights on cluster state for operations teams&lt;/p&gt;

&lt;p&gt;There is one king here - just use Prometheus. Probably the best approach would be using an &lt;a href=&quot;https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack&quot;&gt;operator&lt;/a&gt; that would install Grafana alongside some predefined dashboards.&lt;/p&gt;
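&lt;p&gt;Once the operator is installed, scraping a new application comes down to creating a ServiceMonitor object. A sketch - note that the release label is an assumption that depends on how the kube-prometheus-stack chart was installed:&lt;/p&gt;

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: webapp
  labels:
    release: kube-prometheus-stack   # the label the operator's Prometheus selects on
spec:
  selector:
    matchLabels:
      app: webapp        # matches the labels of the application's Service
  endpoints:
    - port: metrics      # a named port on the Service exposing /metrics
      interval: 30s
```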

&lt;h2 id=&quot;logging&quot;&gt;Logging&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt; (unless a central logging solution is already in place)&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide insights on cluster state for operations teams&lt;/p&gt;

&lt;p&gt;It’s quite similar to monitoring - the majority of solutions are based on Elasticsearch, Fluentd and Kibana. This suite has broad community support, and many problems have been solved and described thoroughly in many posts on the web. ALL should have a logging solution for their platforms, and the easiest way to implement it is to use an operator like &lt;a href=&quot;https://operatorhub.io/operator/logging-operator&quot;&gt;this one&lt;/a&gt; or a Helm Chart like &lt;a href=&quot;https://github.com/opendistro-for-elasticsearch/community/tree/main/open-distro-elasticsearch-kubernetes/helm&quot;&gt;this&lt;/a&gt; based on &lt;a href=&quot;https://opendistro.github.io/for-elasticsearch/&quot;&gt;Open Distro&lt;/a&gt; (an equivalent of Elasticsearch with a more lenient/open source license).&lt;/p&gt;

&lt;h2 id=&quot;tracing&quot;&gt;Tracing&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Optional for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide insights and additional metrics useful for application troubleshooting and performance tuning&lt;/p&gt;

&lt;p&gt;Tracing is a feature that will be highly coveted in really big and complex environments. That’s why ENT organizations should adopt it, and the best way is to implement it using &lt;a href=&quot;https://github.com/uber/jaeger&quot;&gt;Jaeger&lt;/a&gt;. It’s one of the &lt;a href=&quot;https://www.cncf.io/projects/&quot;&gt;graduated&lt;/a&gt; CNCF projects, which only makes it more appealing, as it has proven to be not only highly popular but also backed by a healthy community.&lt;br /&gt;
Implementation requires some work on the application’s part, but the service itself can be easily installed and maintained using &lt;a href=&quot;https://operatorhub.io/operator/jaeger&quot;&gt;this&lt;/a&gt; operator.&lt;/p&gt;

&lt;h2 id=&quot;backup&quot;&gt;Backup&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ENT&lt;/em&gt;, optional for the rest&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Apply the &lt;em&gt;“Redundancy is not a backup solution”&lt;/em&gt; approach&lt;/p&gt;

&lt;p&gt;ALL should remember that redundancy is not a backup solution. Although disaster recovery can be simplified with a properly implemented GitOps solution, where each change of the cluster state goes through a dedicated git repository, in many cases it’s not enough. For those who plan to use persistent storage, I would recommend implementing &lt;a href=&quot;https://velero.io/&quot;&gt;Velero&lt;/a&gt;.&lt;/p&gt;
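&lt;p&gt;With Velero in place, recurring backups are declared as Schedule objects - a sketch with example values:&lt;/p&gt;

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"      # standard cron syntax - every day at 02:00
  template:
    includedNamespaces:
      - "*"                  # back up every namespace
    ttl: 720h                # keep each backup for 30 days
```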

&lt;h2 id=&quot;storage&quot;&gt;Storage&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; For &lt;em&gt;ALL&lt;/em&gt; if stateful applications are planned to be used&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide flexible storage for stateful applications and services&lt;/p&gt;

&lt;p&gt;The easiest use case of Kubernetes is stateless applications that don’t need any storage for keeping their state. Most microservices use some external service (such as databases) that can be deployed outside of a cluster.&lt;br /&gt;
If persistent storage is required, it can still be provided using already existing solutions from outside a Kubernetes cluster. Many of them have drawbacks (e.g. the need to provision persistent volumes manually, less reliability and flexibility), and that’s why keeping storage inside a cluster can be a viable and efficient alternative.&lt;br /&gt;
I would limit the choices for such storage to the following projects:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://rook.io/&quot;&gt;Rook&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://openebs.io/&quot;&gt;OpenEBS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rook is the most popular and, when properly implemented (e.g. deployed on a dedicated cluster or on a dedicated node pool with monitoring, alerting, etc.), can be a great way of providing storage for any kind of workload, including even production databases (although this topic is still controversial and we all need time to grow accustomed to this way of running them).&lt;/p&gt;

&lt;h1 id=&quot;security&quot;&gt;Security&lt;/h1&gt;

&lt;p&gt;This part is crucial for organizations that are focused on providing secure platforms for the most sensitive parts of their systems.&lt;/p&gt;

&lt;h2 id=&quot;non-root-containers&quot;&gt;Non-root containers&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ENT&lt;/em&gt; and probably &lt;em&gt;MBU&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Decrease the risk of exploitation of vulnerabilities found in applications or the operating system they use&lt;/p&gt;

&lt;p&gt;OpenShift made a very brave and good decision by providing a default setting that forbids running containers under the root account. I think this setting should also be implemented by ALL organizations that want to increase the security of workloads running on their Kubernetes clusters.&lt;br /&gt;
It is quite easy to achieve by enabling the &lt;a href=&quot;https://kubernetes.io/docs/concepts/policy/pod-security-policy/&quot;&gt;PodSecurityPolicy&lt;/a&gt; admission controller and applying proper rules. It’s not even an external project - it’s low-hanging fruit that should be mandatory for larger organizations. This, however, has consequences for which images can be used on a platform. Most &lt;em&gt;“official”&lt;/em&gt; images available on Docker Hub run as root, but I can see this changing, and hopefully it will continue to.&lt;/p&gt;
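&lt;p&gt;A minimal policy enforcing the non-root rule could look like the sketch below (the admission controller must be enabled, and the policy must be bound to users or service accounts via RBAC for it to take effect):&lt;/p&gt;

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-nonroot
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot   # rejects pods whose containers would run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                   # a conservative list of allowed volume types
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```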

&lt;h2 id=&quot;enforcing-policies-with-openpolicyagent&quot;&gt;Enforcing policies with OpenPolicyAgent&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ENT&lt;/em&gt;, optional for others&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Enforce security and internal policies&lt;/p&gt;

&lt;p&gt;Many organizations produce tons of security policies written down in documents. They are often enforced by processes and audited yearly or even less often. In many cases, they aren’t adjusted to the real world and were created mostly to meet some requirements instead of protecting and ensuring best security practices are in place. It’s time to start enforcing these policies on the API level, and that’s where &lt;a href=&quot;https://www.openpolicyagent.org/&quot;&gt;OpenPolicyAgent&lt;/a&gt; comes into play. It’s probably not required for small organizations, but it’s definitely mandatory for larger ones where the risks are much higher. In such organizations, properly configured rules may:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;prevent pulling images from untrusted container registries&lt;/li&gt;
  &lt;li&gt;prevent pulling images outside of a list of allowed container images&lt;/li&gt;
  &lt;li&gt;enforce the use of specific labels describing a project and its owner&lt;/li&gt;
  &lt;li&gt;enforce the applying of best practices that may have an impact on the platform reliability (e.g. defining resources and limits, use of liveness and readiness probes)&lt;/li&gt;
  &lt;li&gt;granularly restrict the use of the platform’s API (Kubernetes RBAC can’t be used to specify exceptions)&lt;/li&gt;
&lt;/ul&gt;
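&lt;p&gt;For instance, the first rule on the list - restricting container registries - can be expressed with OPA Gatekeeper. The sketch below is adapted from the Gatekeeper policy library’s allowed-repos template; a matching K8sAllowedRepos constraint then lists the trusted registry prefixes:&lt;/p&gt;

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        openAPIV3Schema:
          properties:
            repos:            # list of allowed image prefixes, set per constraint
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("image %v does not come from an allowed repository", [container.image])
        }
```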

&lt;h2 id=&quot;authentication&quot;&gt;Authentication&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt;, for some &lt;em&gt;SUP&lt;/em&gt; it may be optional&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide a way for users to authenticate and be authorized on the platform&lt;/p&gt;

&lt;p&gt;This is actually a mandatory component for all organizations. One thing that may surprise many is how Kubernetes treats authentication and how it relies on external sources for providing information on users. This means almost unlimited flexibility and at the same time adds even more work and requires a few decisions to be made.&lt;br /&gt;
To make it short - you probably want something like &lt;a href=&quot;https://github.com/dexidp/dex&quot;&gt;DEX&lt;/a&gt; that acts as a proxy to your real Identity Provider (DEX supports many of these, including LDAP, SAML 2.0, and most popular OIDC providers). To make it easier to use you can add &lt;a href=&quot;https://github.com/heptiolabs/gangway&quot;&gt;Gangway&lt;/a&gt;. It’s a pair of projects that are often used together.&lt;/p&gt;
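&lt;p&gt;A Dex configuration fragment gives an idea of how the pieces fit together - the LDAP parameters and the Gangway client below are purely illustrative placeholders:&lt;/p&gt;

```yaml
# Fragment of a Dex configuration - every value is an example placeholder
issuer: https://dex.example.com
storage:
  type: kubernetes
  config:
    inCluster: true
connectors:
  - type: ldap
    id: ldap
    name: "Corporate LDAP"
    config:
      host: ldap.example.com:636
      bindDN: cn=dex,ou=services,dc=example,dc=com
      bindPW: "$LDAP_BIND_PASSWORD"    # injected from the environment
      userSearch:
        baseDN: ou=people,dc=example,dc=com
        username: uid
        idAttr: uid
        emailAttr: mail
staticClients:
  - id: gangway
    name: Gangway
    redirectURIs:
      - https://gangway.example.com/callback
    secret: example-client-secret      # placeholder shared secret
```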

&lt;p&gt;You may find that &lt;a href=&quot;https://www.keycloak.org/&quot;&gt;Keycloak&lt;/a&gt; is a more powerful alternative, but at the same time it is also more complex and difficult to configure.&lt;/p&gt;

&lt;h2 id=&quot;better-secret-management&quot;&gt;Better secret management&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ENT&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provide a better and more secure way of handling confidential information on the platform&lt;/p&gt;

&lt;p&gt;For smaller projects and organizations, encrypting Secrets in the repo where they are stored should be sufficient. Tools such as &lt;a href=&quot;https://github.com/AGWA/git-crypt&quot;&gt;git-crypt&lt;/a&gt;, &lt;a href=&quot;https://git-secret.io/&quot;&gt;git-secret&lt;/a&gt; or &lt;a href=&quot;https://github.com/mozilla/sops&quot;&gt;SOPS&lt;/a&gt; do a great job of securing these objects. I especially recommend the last one - SOPS is very universal and, combined with GPG, can be used to create a very robust solution.&lt;br /&gt;
For larger organizations, I would recommend implementing HashiCorp Vault, which can be easily &lt;a href=&quot;https://www.vaultproject.io/docs/auth/kubernetes&quot;&gt;integrated&lt;/a&gt; with any Kubernetes cluster. It requires a bit of work, so using it for small clusters with few applications makes little sense. For those who have dozens or even hundreds of credentials or other confidential data to store, Vault can make life easier. Auditing, built-in versioning, seamless integration, and - the killer feature - dynamic secrets. By implementing access to external services (i.e. various cloud providers, LDAP, RabbitMQ, ssh and database servers) using credentials created on-demand with a short lifetime, you set a different level of security for your platform.&lt;/p&gt;
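&lt;p&gt;For the SOPS route mentioned above, a single .sops.yaml file in the repository root can restrict encryption to the sensitive fields of Kubernetes Secret manifests - the key fingerprint below is a placeholder:&lt;/p&gt;

```yaml
# .sops.yaml - encrypt only the data fields of files that look like Secrets
creation_rules:
  - path_regex: .*secret.*\.yaml$
    encrypted_regex: ^(data|stringData)$
    pgp: "0000000000000000000000000000000000000000"   # placeholder GPG key fingerprint
```

&lt;p&gt;With such rules in place, metadata and labels stay readable in git diffs while the secret values themselves are encrypted.&lt;/p&gt;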

&lt;h2 id=&quot;security-audits&quot;&gt;Security audits&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ENT&lt;/em&gt; and &lt;em&gt;MBU&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Get more information on potential security breaches&lt;/p&gt;

&lt;p&gt;When handling a big environment, especially one that needs to be compliant with some security standards, providing a way to report suspicious activity is one of the most important requirements. Setting up auditing for Kubernetes is quite easy, and it can even be enhanced by generating more granular information on specific events generated not by API components, but by containers running on a cluster. The project that brings these additional features is &lt;a href=&quot;https://falco.org/&quot;&gt;Falco&lt;/a&gt;. It’s really amazing how powerful this tool is - it uses the Linux kernel’s internal API to trace all activity of a container, such as access to files, sending or receiving network traffic, access to the Kubernetes API, and many, many more. The built-in rules already provide some useful information, but they need to be adjusted to specific needs to get rid of false positives and to trigger alerts when unusual activities are discovered on the cluster.&lt;/p&gt;
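&lt;p&gt;Falco rules are plain YAML; the sketch below, a common example pattern, reports a shell being spawned inside any container:&lt;/p&gt;

```yaml
# A custom Falco rule sketch - tune the condition to your environment
- rule: Shell spawned in a container
  desc: Detect a shell started inside a running container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell spawned in a container (user=%user.name container=%container.name cmdline=%proc.cmdline)"
  priority: WARNING
```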

&lt;h2 id=&quot;container-images-security-scanning&quot;&gt;Container images security scanning&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Don’t allow containers with critical vulnerabilities to run&lt;/p&gt;

&lt;p&gt;Platform security mostly comes down to vulnerabilities in the containers running on it. That’s why it is so important to ensure that the images used to run these containers are scanned for the most critical vulnerabilities. This can be achieved in two ways - by scanning the images on a container registry, or by including an additional step in the CI/CD pipeline used for the deployment.&lt;/p&gt;

&lt;p&gt;It’s worth considering keeping container images outside of the cluster and relying on existing container registries such as &lt;a href=&quot;https://www.docker.com/pricing&quot;&gt;Docker Hub&lt;/a&gt;, &lt;a href=&quot;https://aws.amazon.com/ecr/&quot;&gt;Amazon ECR&lt;/a&gt;, &lt;a href=&quot;https://cloud.google.com/container-registry/&quot;&gt;Google GCR&lt;/a&gt; or &lt;a href=&quot;https://azure.microsoft.com/en-us/services/container-registry/&quot;&gt;Azure ACR&lt;/a&gt;. Yes - even when building an on-prem environment, it is sometimes just easier to use a service from a public cloud provider. It is especially beneficial for smaller organizations that don’t want to invest too much time in building a container registry while still providing a proper level of security and reliability.&lt;/p&gt;

&lt;p&gt;There is one major player in the on-prem container registries market that should be considered when building such a service. It’s &lt;a href=&quot;https://goharbor.io/&quot;&gt;Harbor&lt;/a&gt; which has plenty of features, including security scanning, mirroring of other registries, and replication that allows adding more nines to its availability SLO. Harbor has a built-in &lt;a href=&quot;https://github.com/aquasecurity/trivy&quot;&gt;Trivy&lt;/a&gt; scanner that works pretty well and is able to find vulnerabilities on the &lt;a href=&quot;https://aquasecurity.github.io/trivy/latest/vuln-detection/os/&quot;&gt;operating system&lt;/a&gt; level and also in the &lt;a href=&quot;https://aquasecurity.github.io/trivy/latest/vuln-detection/library/&quot;&gt;application packages&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Trivy can also be used as a standalone tool in a CI/CD pipeline to scan the container image built by one of the stages. This one-line command might protect you from serious trouble, as many are surprised by the number of critical vulnerabilities that exist even in official Docker images.&lt;/p&gt;
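&lt;p&gt;As a sketch, such a pipeline step could look like the following GitLab CI job (the job name and image reference are examples; any CI system with a similar mechanism will do):&lt;/p&gt;

```yaml
# Hypothetical GitLab CI job: fail the pipeline when CRITICAL vulnerabilities are found
scan-image:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --severity CRITICAL --exit-code 1 "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```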

&lt;h1 id=&quot;extra-addons&quot;&gt;Extra addons&lt;/h1&gt;

&lt;p&gt;On top of the basic Kubernetes feature set, there are some interesting addons that extend the platform’s capabilities.&lt;/p&gt;

&lt;h2 id=&quot;user-friendly-interface&quot;&gt;User-friendly interface&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ENT&lt;/em&gt; and &lt;em&gt;MBU&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Allow less experienced users to use the platform&lt;/p&gt;

&lt;p&gt;Who doesn’t like a nice GUI that helps to get a quick overview of what’s going on with your cluster and the applications running on it? Even I crave such interfaces, and I spend most of my time in the command line or in my editor. When designed properly, these interfaces can speed up administration and make working with a Kubernetes environment much more pleasant.&lt;br /&gt;
The &lt;em&gt;”official”&lt;/em&gt; &lt;a href=&quot;https://github.com/kubernetes/dashboard&quot;&gt;Kubernetes dashboard&lt;/a&gt; project is very basic and it’s not the tool that I would recommend for beginners, as it may actually scare people off instead of drawing them to Kubernetes.&lt;br /&gt;
I still believe that OpenShift’s web console is one of the best, but unfortunately it cannot be easily installed with any Kubernetes cluster. If it was possible then it would definitely be my first choice.&lt;br /&gt;
&lt;a href=&quot;https://octant.dev/&quot;&gt;Octant&lt;/a&gt; looks like an interesting project that is extensible and there are already useful plugins available (e.g. Aqua Security &lt;a href=&quot;https://aquasecurity.github.io/starboard/latest/integrations/octant/&quot;&gt;Starboard&lt;/a&gt;). It’s rather a platform than a simple web console, as it actually doesn’t run inside a cluster, but on a workstation.
The other contestant in the UI category is &lt;a href=&quot;https://k8slens.dev/&quot;&gt;Lens&lt;/a&gt;. It’s also a standalone application. It works pretty well and shows nice graphs when Prometheus is installed on the cluster.&lt;/p&gt;

&lt;h2 id=&quot;service-mesh&quot;&gt;Service mesh&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Optional for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Enable more advanced traffic management, more security and flexibility for the applications running on the platform&lt;/p&gt;

&lt;p&gt;Before any project name appears here, there’s a fundamental question that needs to be asked - do you really need a service mesh for your applications? I wouldn’t recommend it for organizations which are just starting their journey with cloud native workloads. Having an additional layer can make the not-so-trivial management of containers even more complex and difficult. Maybe you want to use a service mesh only to encrypt traffic? Consider a proper CNI plugin that would bring this feature transparently. Maybe advanced deployments seem like a good idea, but did you know that even the basic Nginx Ingress controller &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary&quot;&gt;supports&lt;/a&gt; canary releases? Introduce a service mesh only when you really need a specific feature (e.g. multi-cluster communication, traffic policy, circuit breakers, etc.). Most readers would probably be better off without a service mesh, and for those prepared for the additional effort related to increased complexity the choice is limited to a few solutions.&lt;br /&gt;
The first and most obvious one is &lt;a href=&quot;https://istio.io/&quot;&gt;Istio&lt;/a&gt;. The other one that I can recommend is &lt;a href=&quot;https://www.consul.io/docs/connect&quot;&gt;Consul Connect&lt;/a&gt; from HashiCorp. The former is the most popular and is often provided as an add-on in managed Kubernetes services in the cloud. The latter is much simpler and thus easier to use; it’s also a part of Consul, and together they enable the creation and management of multi-cluster environments.&lt;/p&gt;
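&lt;p&gt;For comparison, the Ingress-based canary mechanism mentioned above needs nothing more than a second Ingress with two annotations (the hostname and Service names below are examples):&lt;/p&gt;

```yaml
# Sketch: route ~10% of traffic to a canary Service via ingress-nginx annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-canary
                port:
                  number: 80
```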

&lt;h2 id=&quot;external-dns&quot;&gt;External DNS&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Optional for &lt;em&gt;ALL&lt;/em&gt;, recommended for dynamic environments&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Decrease the operational work involved with managing new DNS entries&lt;/p&gt;

&lt;p&gt;Smaller environments will probably not need many DNS records for external access via load balancer or ingress services. For larger and more dynamic ones, having a dedicated service managing these DNS records may save a lot of time. This service is &lt;a href=&quot;https://github.com/kubernetes-sigs/external-dns&quot;&gt;external-dns&lt;/a&gt; and can be configured to manage DNS records on most DNS services available in the cloud, as well as on traditional DNS servers such as BIND. This addon works best with the next one, which adds TLS certificates to your web applications.&lt;/p&gt;
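&lt;p&gt;Usage comes down to annotating a Service or Ingress - a sketch (the hostname and Service name are examples; external-dns then creates the matching record in the configured DNS provider):&lt;/p&gt;

```yaml
# A Service annotated for external-dns
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
```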

&lt;h2 id=&quot;cert-manager&quot;&gt;Cert-manager&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Optional for &lt;em&gt;ALL&lt;/em&gt;, recommended for dynamic environments&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Get trusted SSL certificates for free!&lt;/p&gt;

&lt;p&gt;Do you still want to pay for your SSL/TLS certificates? Thanks to Let’s Encrypt you don’t need to. Use of Let’s Encrypt has been &lt;a href=&quot;https://letsencrypt.org/stats/&quot;&gt;growing&lt;/a&gt; rapidly over the past few years, and the reason it should be at least considered as a part of a modern Kubernetes platform is how easy it is to automate. There’s a dedicated operator called &lt;a href=&quot;https://cert-manager.io/&quot;&gt;cert-manager&lt;/a&gt; that makes the whole process of requesting and renewing certificates very quick and transparent to applications. Having trusted certificates saves a lot of time and trouble for those who manage many web services exposed externally, including test environments - just ask anyone who has had to inject custom certificate authority keys into dozens of places to make all the components talk to each other. And cert-manager can be used for internal Kubernetes components as well. It’s one of my favourite addons and I hope many will appreciate it as much as I do.&lt;/p&gt;
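&lt;p&gt;As a sketch, a cluster-wide Let’s Encrypt issuer using the HTTP-01 challenge could look like this (the e-mail address and secret name are examples):&lt;/p&gt;

```yaml
# cert-manager ClusterIssuer for Let's Encrypt (production endpoint)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```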

&lt;h2 id=&quot;additional-cluster-metrics&quot;&gt;Additional cluster metrics&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Mandatory for &lt;em&gt;ALL&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Get more insights and enable autoscaling&lt;/p&gt;

&lt;p&gt;There are two additional components that should be installed on clusters used in production: &lt;a href=&quot;https://github.com/kubernetes-sigs/metrics-server&quot;&gt;metrics-server&lt;/a&gt; and &lt;a href=&quot;https://github.com/kubernetes/kube-state-metrics&quot;&gt;kube-state-metrics&lt;/a&gt;. The first is required for the internal autoscaler (HorizontalPodAutoscaler) to work, as metrics-server exposes resource metrics gathered from cluster nodes. The second exposes metrics describing the state of Kubernetes objects, which should feed alerting systems and standard review processes. I can’t imagine working with a production cluster that lacks these features.&lt;/p&gt;
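&lt;p&gt;With metrics-server in place, a basic HorizontalPodAutoscaler is a short manifest - a sketch (the Deployment name and thresholds are examples; older clusters may need the &lt;code&gt;autoscaling/v2beta2&lt;/code&gt; API version):&lt;/p&gt;

```yaml
# Scale a Deployment between 2 and 10 replicas on average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```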

&lt;h2 id=&quot;gitops-management&quot;&gt;GitOps management&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to apply:&lt;/strong&gt; Optional for &lt;em&gt;ALL&lt;/em&gt;, recommended for &lt;em&gt;ENT&lt;/em&gt;&lt;br /&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Decrease the operational work involved with cluster management&lt;/p&gt;

&lt;p&gt;It is not that popular yet, but cluster and environment management is going to be an important topic, especially for larger organizations with dozens of clusters and namespaces and hundreds of developers working on them. Management techniques that use git repositories as the source of truth are known as GitOps, and they leverage the declarative nature of Kubernetes. It looks like &lt;a href=&quot;https://argoproj.github.io/argo-cd/&quot;&gt;ArgoCD&lt;/a&gt; has become a major player in this area, and installing it on the cluster may bring many benefits for teams responsible for maintenance, as well as for the security of the whole platform.&lt;/p&gt;
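&lt;p&gt;To illustrate the model, a single ArgoCD &lt;code&gt;Application&lt;/code&gt; object ties a git repository to a target cluster and namespace (the repository URL and names below are examples):&lt;/p&gt;

```yaml
# ArgoCD Application: sync the manifests in deploy/ into the myapp namespace
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/myapp.git
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```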

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;The aforementioned projects do not even begin to exhaust the subject of the solutions available for Kubernetes. This list merely shows how many possibilities are out there, how rich the Kubernetes ecosystem is, and finally how quickly it evolves.&lt;br /&gt;
For some it may also be surprising how many features standard Kubernetes lacks for running production workloads. Even the multiple flavours of &lt;em&gt;Kubernetes-as-a-Service&lt;/em&gt; available on major cloud platforms are missing most of these features, let alone the clusters that are built from scratch for on-prem environments. It shows how difficult the process of building a bespoke Kubernetes platform can become, but at the same time those who manage to put it all together can be assured that their creation will bring their organization to the next level of automation, reliability, security and flexibility.&lt;br /&gt;
For the rest there’s another and easier path - using a Kubernetes-based product that has most of these features built-in.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><category term="containers" /><category term="onprem" /><summary type="html">So you want to build yourself a Kubernetes cluster? You have your reasons. Some may want to utilize the hardware they own, some may not fully trust these fancy cloud services or just simply want to have a choice and build themselves a hybrid solution. There are a couple of products available that I’ve reviewed, but you’ve decided to build a platform from scratch. And again, there are a myriad of reasons why it might be a good idea and also many that would convince you it’s not worth your precious time. In this article, I will focus on providing a list of things to consider when starting a project building a Kubernetes-based platform using only the most popular open source components. Target groups Before we jump into the technicalities, I want to describe three target groups that are referred to in the below sections. Startups (SUP) - very small companies or the ones with basic needs; their focus is on using basic Kubernetes API and facilitating services around it Medium businesses (MBU) - medium companies which want to leverage Kubernetes to boost their growth and innovation; their focus is on building a scalable platform that is also easy to maintain and extend Enterprises (ENT) - big companies with even bigger needs, scale, many policies, and regulations; they are the most demanding and are focused on repeatability, security, and scalability (in terms of the growing number of developers and teams working on their platform) All these groups have different needs and thus they should build their platform in a slightly different way with different solutions applied to particular areas. 
I will refer to them using their abbreviations or as ALL when referring to all of them. Installation When to apply: Mandatory for ALL Purpose: To have a robust and automated way of management your cluster(s) When deciding on installing Kubernetes without using any available distribution you have a fairly limited choice of installers. You can try using kubeadm directly or use more generic kubespray. The latter one will help you not only install, but also maintain your cluster (upgrades, node replacement, cluster configuration management). Both of these are universal and are unaware of how cluster nodes are provisioned. If you wish to use an automated solution that would also handle provisioning cluster nodes then Metal3 could be something you might want to try. It’s still in the alpha stage, but it looks promising. If you want a better and more cloud-native way of managing your clusters that would enable easy scaling then you may want to try ClusterAPI project. It supports multiple cloud providers, but it can be used on on-prem environments with the aforementioned Metal3, vSphere, or OpenStack. One more thing worth noting here: the operating system used by cluster nodes. Since the future of CentOS seems unclear, Ubuntu becomes the main building block for bespoke Kubernetes clusters. Some may want to choose a slim alternative that has replaced CoreOS - Flatcar Linux. Cluster autoscaler When to apply: Highly recommended for ENT, optional for others Purpose: Scale up and down automatically your platform If you choose ClusterAPI or your cluster uses some API in another way to manage cluster nodes (e.g. vSphere, OpenStack, etc.) then you should also use the cluster autoscaler component. It is almost a mandatory feature for ENT but it can also be useful for MBU organizations. By forcing nodes to be ephemeral entities that can be easily replaced/removed/added, you decrease the maintenance costs. 
Network CNI plugin When to apply: Mandatory for ALL Purpose: Connect containers with optional additional features such as encryption The networking plugin is one of the decisions that need to be taken prudently, as it cannot be easily changed afterward. To make things brief I would shorten the list to two plugins - Calico or Cilium. Calico is older and maybe a little bit more mature, but Cilium looks very promising and utilizes Linux Kernel BPF. For a more detailed comparison I would suggest reading this review of multiple plugins. Choose wisely and avoid CNI without NetworkPolicy support - having a Kubernetes cluster without the possibility to implement firewall rules is a bad idea. Both Calico and Cilium support encryption, which is a nice thing to have, but Cilium is able to encrypt all the traffic (Calico encrypts only pod-to-pod). Ingress controller When to apply: Mandatory for ALL Purpose: Provide an easy and flexible way to expose web applications with optional advanced features Ingress is a component that can be easily swapped out when the cluster is running. Actually, you can have multiple Ingress controllers by leveraging IngressClass introduced in Kubernetes 1.18. A comprehensive comparison can be found here, but I would limit it to a select few controllers depending on your needs. For those looking for compatibility with other Kubernetes clusters (e.g. hybrid solution), I would suggest starting with the most mature and battle-tested controller - nginx ingress controller. The reason is simple - you need only basic features described in Ingress API that have to be implemented by every Ingress controller. That should cover 90% of cases, especially for SUP group. If more features are required (such as sophisticated http routing, authentication, authorization, etc.) then the following options are the most promising: Contour - it’s the only CNCF project that is in the Incubating maturity level group. 
And it’s based on Envoy which is the most flexible proxy available out there. Ambassador - has nice features, but many of them are available in the paid version. And yes - it also uses Envoy. HAproxy from HAproxytech - for those who are familiar with HAproxy and want to leverage it to provide a robust Ingress controller Traefik - they have an awesome logo and if you’ve been using it for some Docker load-balancing then you may find it really useful for Ingress as well Monitoring When to apply: Mandatory for ALL (unless an existing monitoring solution compatible with Kubernetes exists) Purpose: Provide insights on cluster state for operations teams There is one king here - just use Prometheus. Probably the best approach would be using an operator that would install Grafana alongside some predefined dashboards. Logging When to apply: Mandatory for ALL (unless an existing central logging solution is already is) Purpose: Provide insights on cluster state for operations teams It’s quite similar to monitoring - the majority of solutions are based on Elasticsearch, Fluentd and Kibana. This suite has broad community support and many problems have been solved and described thoroughly in many posts on the web. ALL should have a logging solution for their platforms and the easiest way to implement it is to use an operator like this one or a Helm Chart like this based on Open Distro (it’s an equivalent of Elasticsearch with more lenient/open source license). Tracing When to apply: Optional for ALL Purpose: Provide insights and additional metrics useful for application troubleshooting and performance tuning Tracing is a feature that will be highly coveted in really big and complex environments. That’s why ENT organizations should adopt it and the best way is to implement it using Jaeger. It’s one of graduated CNCF projects which only makes it more appealing, as it’s been proven to be not only highly popular but also has a healthy community around it. 
Implementation requires some work on the application’s part, but the service itself can be easily installed and maintained using this operator. Backup When to apply: Mandatory for ENT, optional for the rest Purpose: Apply the *“Redundancy is not a backup solution” approach ALL should remember that redundancy is not a backup solution. Although with a properly implemented GitOps solution, where each change of the cluster state goes through a dedicated git repository, the disaster recovery can be simplified, in many cases, it’s not enough. For those who plan to use persistent storage, I would recommend implementing Velero. Storage When to apply: For ALL if stateful applications are planned to be used Purpose: Provide flexible storage for stateful applications and services The easiest use case of Kubernetes is stateless applications that don’t need any storage for keeping their state. Most microservices use some external service (such as databases) that can be deployed outside of a cluster. If persistent storage is required it can still be provided using already existing solutions from outside a Kubernetes cluster. There are some drawbacks (i.e. the need to provision persistent volumes manually, less reliability and flexibility) in many of them and that’s why keeping storage inside a cluster can be a viable and efficient alternative. I would limit the choices for such storage to the following projects: Rook OpenEBS Rook is the most popular and when properly implemented (e.g. deployed on a dedicated cluster or on a dedicated node pool with monitoring, alerting, etc.) can be a great way of providing storage for any kind of workloads, including even production databases (although this topic is still controversial and we all need time to accustom to this way of running them). Security This part is crucial for organizations that are focused on providing secure platforms for the most sensitive parts of their systems. 
Non-root containers When to apply: Mandatory for ENT and probably MBU Purpose: Decrease the risk of potential exploiting of vulnerabilities found in applications or the operating system they use OpenShift made a very brave and good decision by providing a default setting that forbids running containers under the root account. I think this setting should be also implemented for ALL organizations that want to increase the level of workloads running on their Kubernetes clusters. It is quite easy to achieve by implementing PodSecurityPolicy admission controller and applying proper rules. It’s not even an external project, but it’s a low-hanging fruit that should be mandatory to implement for larger organizations. This, however, brings consequences in what images would be used on a platform. Most ”official” images available on Docker Hub run as root, but I see how it changes, and hopefully, it will change in the future. Enforcing policies with OpenPolicyAgent When to apply: Mandatory for ENT, optional for others Purpose: Enforce security and internal policies Many organizations produce tons of security policies written down in some documents. They are often enforced by processes and audited yearly or rarely. In many cases, they aren’t adjusted to the real world and were created mostly to meet some requirements instead of protecting and ensuring best security practices are in place. It’s time to start enforcing these policies on the API level and that’s where OpenPolicyAgent comes to play. Probably it’s not required for small organizations, but it’s definitely mandatory for larger ones where risks are much higher. In such organizations properly configured rules that may: prevent pulling images from untrusted container registries prevent pulling images outside of a list of allowed container images enforce the use of specific labels describing a project and its owner enforce the applying of best practices that may have an impact on the platform reliability (e.g. 
defining resources and limits, use of liveness and readiness probes) granularly restrict the use of the platform’s API (Kubernetes RBAC can’t be used to specify exceptions) Authentication When to apply: Mandatory for ALL, for some SUP it may be optional Purpose: Provide a way for user to authenticate and authorize to the platform This is actually a mandatory component for all organizations. One thing that may surprise many is how Kubernetes treats authentication and how it relies on external sources for providing information on users. This means almost unlimited flexibility and at the same time adds even more work and requires a few decisions to be made. To make it short - you probably want something like DEX that acts as a proxy to your real Identity Provider (DEX supports many of these, including LDAP, SAML 2.0, and most popular OIDC providers). To make it easier to use you can add Gangway. It’s a pair of projects that are often used together. You may find Keycloak as an alternative that is more powerful, but at the same time is also more complex and difficult to configure. Better secret management When to apply: Mandatory for ENT Purpose: Provide a better and more secure way of handling confidential information on the platform For smaller projects and organizations encrypting Secrets in a repo where they are stored should be sufficient. Tools such as git-crypt , git-secret or SOPS do a great job in securing these objects. I recommend especially the last one - SOPS is very universal and combined with GPG can be used to create a very robust solution. For larger organizations, I would recommend implementing HashiCorp Vault which can be easily integrated with any Kubernetes cluster. It requires a bit of work and thus the use of it for small clusters with few applications seems to make no sense. For those who have dozens or even hundreds of credentials or other confidential data to store Vault can make their life easier. 
Auditing, built-in versioning, seamless integration, and what is the killer feature - dynamic secrets. By implementing access to external services (i.e. various cloud providers, LDAP, RabbitMQ, ssh and database servers) using credentials created on-demand with a short lifetime, you set a different level of security for your platform. Security audits When to apply: Mandatory for ENT and MBU Purpose: Get more information on potential security breaches When handling a big environment, especially one that needs to be compliant with some security standards, providing a way to report suspicious activity is one of the most important requirements. Setting auditing for Kubernetes is quite easy and it can even be enhanced by generating more granular information on specific events generated not by API components, but by containers running on a cluster. The project that brings these additional features is Falco. It’s really amazing how powerful this tool is - it uses the Linux kernel’s internal API to trace all activity of a container such as access to files, sending or receiving network traffic, access to Kubernetes API, and many, many more. The built-in rules already provide some useful information, but they need to be adjusted for specific needs to get rid of false positives and triggers when unusual activities are discovered on the cluster. Container images security scanning When to apply: Mandatory for ALL Purpose: Don’t allow to run containers with critical vulnerabilities found The platform security mostly comes down to vulnerabilities in the containers running on it. That’s why it is so important to ensure that the images used to run these containers are scanned against most critical vulnerabilities. This can be achieved in two ways - one is by scanning the images on a container registry and the other is by including an additional step in the CI/CD pipeline used for the deployment. 
It’s worth considering keeping container images outside of the cluster and relying on existing container registries such as Docker Hub, Amazon ECR, Google GCR or Azure ACR. Yes - even when building an on-prem environment sometimes is just easier to use a service from a public cloud provider. It is especially beneficial for smaller organizations that don’t want to invest too much time in building a container registry and at the same time they want to provide a proper level of security and reliability. There is one major player in the on-prem container registries market that should be considered when building such a service. It’s Harbor which has plenty of features, including security scanning, mirroring of other registries, and replication that allows adding more nines to its availability SLO. Harbor has a built-in Trivy scanner that works pretty well and is able to find vulnerabilities on the operating system level and also in the application packages. Trivy can also be used as a standalone tool in a CI/CD pipeline to scan the container image built by one of the stages. This one-line command might protect you from serious troubles as many can be surprised by the number of critical vulnerabilities that exist even in the official docker images. Extra addons On top of basic Kubernetes features there are some interesting addons that extend Kubernetes basic features. User-friendly interface When to apply: Mandatory for ENT and MBU Purpose: Allow less experienced users to use the platform Who doesn’t like a nice GUI that helps to get a quick overview of what’s going on with your cluster and applications running on it? Even I crave such interfaces and I spend most of my time in my command line or with my editor. These interfaces when designed properly can speed up the process of administration and just make the work with the Kubernetes environment much more pleasant. 
The ”official” Kubernetes dashboard project is very basic and it’s not the tool that I would recommend for beginners, as it may actually scare people off instead of drawing them to Kubernetes. I still believe that OpenShift’s web console is one of the best, but unfortunately it cannot be easily installed with any Kubernetes cluster. If it was possible then it would definitely be my first choice. Octant looks like an interesting project that is extensible and there are already useful plugins available (e.g. Aqua Security Starboard). It’s rather a platform than a simple web console, as it actually doesn’t run inside a cluster, but on a workstation. The other contestant in the UI category is Lens. It’s also a standalone application. It works pretty well and shows nice graphs when there’s a prometheus installed on the cluster. Service mesh When to apply: Optional for ALL Purpose: Enable more advanced traffic management, more security and flexibility for the applications running on the platform Before any project name appears here there’s a fundamental question that needs to be asked here - do you really need a service mesh for your applications? I wouldn’t recommend it for organizations which just start their journey with cloud native workloads. Having an additional layer can make non-so-trivial management of containers even more complex and difficult. Maybe you want to use service mesh only to encrypt traffic? Consider a proper CNI plugin that would bring this feature transparently. Maybe advanced deployment seems like a good idea, but did you know that even basic Nginx Ingress controller supports canary releases? Introduce a service mesh only then when you really need a specific feature (e.g. multi-cluster communication, traffic policy, circuit breakers, etc.). Most readers would probably be better off without service mesh and for those prepared for the additional effort related to increased complexity the choice is limited to few solutions. 
The first and most obvious one is Istio. The other that I can recommend is Consul Connect from HashiCorp. The former is also the most popular one and is often provided as an add-on in the Kubernetes services in the cloud. The latter one seems to be much simpler, but also is easier to use. It’s also a part of Consul and together they enable creation and management of multi-cluster environments. External dns When to apply: Optional for ALL, recommended for dynamic environments Purpose: Decrease the operational work involved with managing new DNS entries Smaller environments will probably not need many dns records for the external access via load balancer or ingress services. For larger and more dynamic ones having a dedicated service managing these dns records may save a lot of time. This service is external-dns and can be configured to manage dns records on most dns services available in the cloud and also on traditional dns servers such as bind. This addon works best with the next one which adds TLS certificates to your web applications. Cert-manager When to apply: Optional for ALL, recommended for dynamic environments Purpose: Get trusted SSL certificates for free! Do you still want to pay for your SSL/TLS certificates? Thanks to Let’s Encrypt you don’t need to. But this is just one of the Let’s Encrypt’s features. Use of Let’s Encrypt has been growing rapidly over the past few years. Tand the reason why is that it’s one of the things that should be at least considered as a part of the modern Kubernetes platform is how easy it is to automate. There’s a dedicated operator called cert-manager that makes the whole process of requesting and refreshing certificates very quick and transparent to applications. Having trusted certificates saves a lot of time and trouble for those who manage many web services exposed externally, including test environments. 
Just ask anyone who had to inject custom certificate authority keys into dozens of places to make all the components talk to each other without any additional effort. And cert-manager can be used for internal Kubernetes components as well. It’s one of my favourite addons and I hope many will appreciate it as much as I do.
Additional cluster metrics
When to apply: Mandatory for ALL
Purpose: Get more insights and enable autoscaling
There are two additional components that should be installed on clusters used in production: metrics-server and kube-state-metrics. The first is required for the internal autoscaler (HorizontalPodAutoscaler) to work, as metrics-server exposes metrics gathered from various cluster components. I can’t imagine working with a production cluster that lacks these features and all the events that should be a part of standard security review processes and alerting systems.
GitOps management
When to apply: Optional for ALL, recommended for ENT
Purpose: Decrease the operational work involved with cluster management
It is not that popular yet, but cluster and environment management is going to be an important topic, especially for larger organizations where there are dozens of clusters and namespaces, and hundreds of developers working on them. Management techniques involving git repositories as a source of truth are known as GitOps, and they leverage the declarative nature of Kubernetes. It looks like ArgoCD has become a major player in this area, and installing it on the cluster may bring many benefits for teams responsible for maintenance, but also for the security of the whole platform.
Conclusion
The aforementioned projects do not even begin to exhaust the subject of the solutions available for Kubernetes. This list merely shows how many possibilities are out there, how rich the Kubernetes ecosystem is, and finally how quickly it evolves. 
For some it may also be surprising that standard Kubernetes lacks some features required for running production workloads. Even the multiple versions of Kubernetes-as-a-Service available on major cloud platforms are missing most of these features, let alone the clusters that are built from scratch for on-prem environments. It shows how difficult the process of building a bespoke Kubernetes platform can become, but at the same time those who manage to put it all together can be assured that their creation will bring their organization to the next level of automation, reliability, security and flexibility. For the rest there’s another, easier path - using a Kubernetes-based product that has most of these features built in.</summary></entry><entry><title type="html">Which Kubernetes distribution to choose for on-prem environments?</title><link href="https://blog.cloudowski.com/articles/which-kubernetes-for-on-prem/" rel="alternate" type="text/html" title="Which Kubernetes distribution to choose for on-prem environments?" /><published>2021-01-30T00:00:00+01:00</published><updated>2021-01-30T00:00:00+01:00</updated><id>https://blog.cloudowski.com/articles/which-kubernetes-for-on-prem</id><content type="html" xml:base="https://blog.cloudowski.com/articles/which-kubernetes-for-on-prem/">&lt;p&gt;Most people think that Kubernetes was designed to bring more features and more abstraction layers to cloud environments. Well, I think the biggest benefits can be achieved in on-premise environments, because of the big gap between those environments and the ones that can be easily created in the cloud. This opens up many excellent opportunities for organizations which for various reasons choose to stay outside of the public cloud.&lt;br /&gt;
In order to leverage Kubernetes on on-premise hardware, one of the biggest decisions that needs to be made is which software platform to use for Kubernetes. According to the &lt;a href=&quot;https://kubernetes.io/partners/#conformance&quot;&gt;official&lt;/a&gt; listing of available Kubernetes distributions, there are dozens of options available. If you look closely at them, however, there are only a few viable ones, as many of them are either inactive or have been merged with other projects (e.g. Pivotal Kubernetes Service merged with VMware Tanzu). I expect that 3-5 of these distributions will eventually prevail over the next 2 years, each targeting its own niche market segment.&lt;br /&gt;
Let’s have a look at those that have stayed in the game and can be used as a foundation for a highly automated on-premise platform.&lt;/p&gt;

&lt;h2 id=&quot;1-openshift&quot;&gt;1. OpenShift&lt;/h2&gt;

&lt;p&gt;I’ll start with the obvious and probably the best choice there is - OpenShift Container Platform. I’ve written about this product many times, and there’s still no other Kubernetes distribution on the market that is as rich in features. This also comes with its biggest disadvantage - a price that for some is just too high. OpenShift is Red Hat’s flagship product, targeted at enterprises. Of course they sell it to medium or even small companies, but the main target group is big enterprises with big budgets. It has also become a platform for Red Hat’s other products and other vendors’ services that are easily installable and available at &lt;a href=&quot;https://www.operatorhub.io/&quot;&gt;https://www.operatorhub.io/&lt;/a&gt;.&lt;br /&gt;
OpenShift can be installed in the cloud, but on-premise environments are where it shows its most powerful features. Almost every piece of it is highly automated, which enables easy maintenance of clusters (installation, upgrades and scaling), rapid deployment of supplementary services (databases, service mesh) and platform configuration. No other distribution has achieved that level of automation. OpenShift is also the most complete solution, with integrated logging, monitoring and CI/CD (although they are still working on switching from Jenkins to the Tekton engine, which is not that feature-rich yet).&lt;/p&gt;
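&lt;p&gt;To give a feel for what that operator-driven automation looks like in practice - a sketch on my part, where the operator name and channel are placeholders rather than a real catalog entry - installing a service from OperatorHub usually boils down to applying a single OLM Subscription manifest:&lt;/p&gt;

```yaml
# Hypothetical example: subscribing to an operator from OperatorHub via
# Operator Lifecycle Manager. The operator name and channel are placeholders.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

&lt;p&gt;Once applied, OLM installs the operator and keeps it updated within the chosen channel - which is a big part of why day-2 operations on OpenShift feel so hands-off.&lt;/p&gt;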

&lt;h3 id=&quot;when-to-choose-openshift&quot;&gt;When to choose OpenShift&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;If you have a big budget - money can’t buy happiness, but it can buy you the best Kubernetes distribution, so why hesitate?&lt;/li&gt;
  &lt;li&gt;If you want to have the easiest and smoothest experience with Kubernetes - a user-friendly web console that is second to none and comprehensive documentation.&lt;/li&gt;
  &lt;li&gt;You don’t plan to scale rapidly but you need a bulletproof solution - OpenShift can be great even for small environments, and as long as they don’t grow it can be financially reasonable&lt;/li&gt;
  &lt;li&gt;Your organization has few DevOps/Ops people - OpenShift is less demanding from a maintenance perspective and may help to overcome problems with finding highly skilled Kubernetes and infrastructure experts&lt;/li&gt;
  &lt;li&gt;The systems that your organization builds are complex - in cases where the development and deployment processes require a lot of additional services, there’s no better way to create and maintain clusters on on-premise environments than by using operators (and buying additional support for them if needed)&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;If you need support (?) - I’ve put it here just for the sake of providing some reasonable justification for the high price of an OpenShift subscription, but unfortunately many customers are not satisfied with the level of product support and thus it’s not the biggest advantage here&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;when-to-avoid-openshift&quot;&gt;When to avoid OpenShift&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;All you need is the Kubernetes API - maybe all these fancy features are superfluous and a plain Kubernetes distribution is enough, provided that you have a team of skilled people who can build and maintain it&lt;/li&gt;
  &lt;li&gt;If your budget is tight - that’s obvious, but many believe they can somehow overcome the high price of OpenShift by efficiently bin packing their workloads onto smaller clusters or by getting a real bargain when ordering their subscriptions (I guess it’s possible, but only for really big orders of hundreds of nodes)&lt;/li&gt;
  &lt;li&gt;Your organization is an avid supporter of open source projects and avoids any potential vendor lock-ins - although OpenShift includes Kubernetes and can be fully compatible with other Kubernetes distributions, there are some areas where a potential vendor lock-in can occur (e.g. reliance on builtin operators and their APIs)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;2-okd&quot;&gt;2. OKD&lt;/h2&gt;

&lt;p&gt;Back in the day, Red Hat used an upstream-downstream strategy for product development, where open source upstream projects were free to use and their downstream, commercial products were heavily dependent on those upstreams and built on top of them. That changed with OpenShift 4, whose open source equivalent - OKD - was released months after OpenShift had been redesigned with help from the CoreOS team (Red Hat acquired CoreOS in 2018).&lt;br /&gt;
So OKD is an open source version of OpenShift, and it’s free. It’s a similar strategy to the one Red Hat has been using for years - attract people and accustom them to the free (upstream) versions while giving them an experience very similar to the paid products. The only difference is, of course, the lack of support and a few features that are available in OpenShift only. That’s one of the key factors to consider when deciding on a Kubernetes platform - does your organization need support, or will it get by without it?&lt;br /&gt;
Things got a little more complicated after Red Hat (which owns the CentOS project) &lt;a href=&quot;https://blog.centos.org/2020/12/future-is-centos-stream/&quot;&gt;announced&lt;/a&gt; that CentOS 8 will cease to exist in the form that has been known for years. CentOS is widely used by many companies as a free version of RHEL (Red Hat Enterprise Linux), and now that this has changed, we don’t know what IBM will do with OKD (I suspect it was their business decision to pull the plug). There’s a risk that OKD will no longer be developed either, or at least that it will not resemble OpenShift as it does now.&lt;br /&gt;
For now, being still very similar to OpenShift, OKD can also be considered one of the best Kubernetes platforms to use for on-premise installations.&lt;/p&gt;

&lt;h3 id=&quot;when-to-choose-okd&quot;&gt;When to choose OKD&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;You don’t care about Red Hat addons, but still need a highly automated platform - OKD can still bring your environment to a completely different level by leveraging operators and builtin services (e.g. logging, monitoring)&lt;/li&gt;
  &lt;li&gt;You don’t need support, because you have really smart people with Kubernetes skills - either you pay Red Hat for its support or build an internal team that would act as 1st, 2nd and 3rd line of support (not to mention the vast resources available on the web)&lt;/li&gt;
  &lt;li&gt;You plan to run internal workloads only, without exposing them outside - Red Hat brags about providing a curated list of container images, while OKD relies on the community’s work on providing security patches, which causes some delays; for some this can be an acceptable risk, especially if the platform is used internally&lt;/li&gt;
  &lt;li&gt;You need a Kubernetes distribution that is user-friendly - the web console in OKD is almost identical to the one in OpenShift, which I already described as second to none; it helps less experienced users get started, and even more experienced ones can use it to perform daily tasks faster by leveraging all the information gathered in a concise form&lt;/li&gt;
  &lt;li&gt;You want to decrease the costs of OpenShift and use it for testing environments only - this idea seems reasonable from the economic point of view, and if planned and executed well it makes sense; there are some caveats though (e.g. it is against Red Hat’s license to use most of their container images)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;when-to-avoid-okd&quot;&gt;When to avoid OKD&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Plain Kubernetes is all you need - with all these features comes complexity that may just not be what your organization needs, and you’d be better off with some simpler Kubernetes distribution&lt;/li&gt;
  &lt;li&gt;You expect quick fixes and patches - don’t get me wrong, it looks like they are delivered, but it’s not guaranteed and relies solely on the community (e.g. for OpenShift Origin 3, the predecessor of OKD, some container images used internally by the platform weren’t updated for months, whereas OpenShift provided updates fairly quickly)&lt;/li&gt;
  &lt;li&gt;You need a stable and predictable platform - nobody expected CentOS 8 would no longer be an equivalent to RHEL, so similar decisions by IBM executives could affect OKD, and there’s a risk that sometime in the future all OKD users would have no choice but to migrate to some other solution&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;3-rancher&quot;&gt;3. Rancher&lt;/h2&gt;

&lt;p&gt;After Rancher had been &lt;a href=&quot;https://rancher.com/blog/2020/suse-to-acquire-rancher/&quot;&gt;acquired&lt;/a&gt; by SUSE, a new chapter opened for this niche player on the market. Although SUSE already had its own Kubernetes &lt;a href=&quot;https://www.suse.com/products/caas-platform/&quot;&gt;solution&lt;/a&gt;, it’s likely that they will keep only a single offering of that type, and it’s going to be Rancher.&lt;br /&gt;
Basically, Rancher offers easy management of multiple Kubernetes clusters, which can be provisioned manually and imported into the Cluster Manager panel, or provisioned by Rancher using its own Kubernetes distribution. They call it RKE - Rancher Kubernetes Engine - and it can be installed on most major cloud providers, but also on vSphere. Managing multiple clusters using Rancher is very easy, and combined with plenty of authentication options it makes a really compelling solution for those who plan to manage hybrid, multi-cluster, or even multi-cloud environments.&lt;br /&gt;
I think that Rancher has initiated many interesting projects, including K3s (a lightweight Kubernetes distribution targeted at edge computing), RKE (the aforementioned Kubernetes distribution), and Longhorn (distributed storage). You can see they are in the middle of an intensive development cycle - even by looking at Rancher’s inconsistent UI, which is divided in two: Cluster Manager, with a fresh look and a decent list of options, and Cluster Explorer, which is less pleasant but offers more insights. Let’s hope they will continue improving Rancher and RKE to be even more usable, so that they become an even more compelling Kubernetes platform for on-premise environments.&lt;/p&gt;
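&lt;p&gt;To give a feel for how lightweight RKE provisioning is - this is a minimal sketch, with hypothetical node addresses and SSH user - a cluster is described in a single cluster.yml and created with rke up:&lt;/p&gt;

```yaml
# cluster.yml - a minimal sketch of an RKE cluster definition
# (node addresses and the SSH user are hypothetical placeholders).
nodes:
  - address: 10.0.0.11
    user: rancher
    role: [controlplane, etcd]
  - address: 10.0.0.12
    user: rancher
    role: [worker]
  - address: 10.0.0.13
    user: rancher
    role: [worker]
```

&lt;p&gt;Running rke up against such a file provisions the cluster over SSH and writes out a kubeconfig; the resulting cluster can then be imported into Rancher’s Cluster Manager.&lt;/p&gt;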

&lt;h3 id=&quot;when-to-choose-rancher&quot;&gt;When to choose Rancher&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;If you already have VMware vSphere - Rancher makes it very easy to spawn new on-premise clusters by leveraging vSphere API&lt;/li&gt;
  &lt;li&gt;If you plan to maintain many clusters (all on-premise, hybrid or multi-cloud) - it’s just easier to manage them from a single place where you log in using unified credentials (it’s very easy to set up authentication against &lt;a href=&quot;https://rancher.com/docs/rancher/v2.x/en/admin-settings/authentication/&quot;&gt;various services&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;You focus on platform maintenance more than on features supporting development - with a nice integrated backup solution, a CIS benchmark engine, and only a few developer-focused features (I think their CI/CD solution was put there just for marketing purposes - it’s barely usable), it’s just more appealing to infrastructure teams&lt;/li&gt;
  &lt;li&gt;If you really need paid support for your Kubernetes environment - Rancher provides support for its product, including its own Kubernetes distribution (RKE) as well as custom installations; as for the price, it’s a mystery that will be revealed when you contact Sales&lt;/li&gt;
  &lt;li&gt;You need browser-optimized access to your environment - with the builtin shell it’s very easy to access cluster resources without configuring anything on a local machine&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;when-to-avoid-rancher&quot;&gt;When to avoid Rancher&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;You don’t care about fancy features - although there are significantly fewer features in Rancher than in OpenShift or OKD, it is still more than just a nice UI, and some may find those features redundant and can get by without them&lt;/li&gt;
  &lt;li&gt;You’re interested in more mature products - it looks like Rancher has been in active development over the past few months, and it will probably be redesigned at some point, just as happened with OpenShift (versions 3 and 4 are very different)&lt;/li&gt;
  &lt;li&gt;You don’t plan or need to use multiple clusters - maybe one is enough?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;4-vmware-tanzu&quot;&gt;4. VMware Tanzu&lt;/h2&gt;

&lt;p&gt;The last contender is Tanzu, from the biggest on-premise virtualization software vendor. When they &lt;a href=&quot;https://blogs.vmware.com/vsphere/2019/08/project-pacific-technical-overview.html&quot;&gt;announced&lt;/a&gt; project Pacific, I knew it was going to be huge. And it is. Tanzu is a set of products that leverage Kubernetes and integrate it with vSphere. The product that manages Kubernetes clusters is called Tanzu Kubernetes Grid (TKG), and it’s just the beginning of the Tanzu offering. There’s Tanzu Mission Control for managing multiple clusters, Tanzu Observability for… observability, Tanzu Service Mesh for… yes, it’s their service mesh, and many more. For anyone familiar with enterprise offerings, it may resemble a product suite from a giant like IBM or Oracle.&lt;br /&gt;
Let’s be honest here - Tanzu is not for anyone interested in “some” Kubernetes; it’s for enterprises accustomed to enterprise products and everything that comes with them (i.e. sales, support, software that can be downloaded only by authorized users, etc.). And it’s especially designed for those whose infrastructure is based on the VMware ecosystem - it’s a perfect addition that meets the requirements of development teams within an organization, but also addresses operations teams’ concerns with the same tools they have known for over a decade now.&lt;br /&gt;
When it comes to features, they are pretty standard - easy authentication, cluster scaling, build services based on &lt;a href=&quot;https://buildpacks.io/&quot;&gt;buildpacks&lt;/a&gt;, networking integrated with VMware NSX, storage integrated with vSphere - wait, it’s starting to sound like a feature list of another vSphere addon. I guess it is an addon. For those looking for fancy features, I suggest waiting a bit longer for VMware to come up with new Tanzu products (or for a new acquisition of another company from the cloud native world, as they did with &lt;a href=&quot;https://cloud.vmware.com/community/2019/05/15/vmware-to-acquire-bitnami/&quot;&gt;Bitnami&lt;/a&gt;).&lt;/p&gt;
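&lt;p&gt;To illustrate what cluster creation looks like on vSphere with Tanzu - this is only a sketch based on the v1alpha1 TanzuKubernetesCluster API, and the VM classes and storage class names below are environment-specific placeholders:&lt;/p&gt;

```yaml
# Hypothetical TanzuKubernetesCluster manifest (vSphere with Tanzu).
# VM classes and the storage class are environment-specific placeholders.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: dev-team
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vsan-default
```

&lt;p&gt;Applying such a manifest to the supervisor cluster asks vSphere to provision a workload cluster - the same declarative pattern used across the rest of the Kubernetes ecosystem.&lt;/p&gt;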

&lt;h3 id=&quot;when-to-choose-tanzu&quot;&gt;When to choose Tanzu&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;When your company already uses VMware vSphere - just contact your VMware sales guy who will prepare you an offer and the team that takes care of your infrastructure will do the rest&lt;/li&gt;
  &lt;li&gt;If you don’t plan to deploy anything outside of your own infrastructure - although VMware tries to be a hybrid provider by enabling integration with AWS or GCP, it will stay focused on the on-premise market, where it’s undeniably the leader&lt;/li&gt;
  &lt;li&gt;If you wish to use multiple clusters - Tanzu enables easy creation of Kubernetes clusters that can be assigned to development teams&lt;/li&gt;
  &lt;li&gt;If you need support - it’s an enterprise product with enterprise support&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;when-to-avoid-tanzu&quot;&gt;When to avoid Tanzu&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;If you don’t already have vSphere in your organization - you need vSphere and its ecosystem, which Tanzu is a part of, to start working with VMware’s Kubernetes services; otherwise it will cost you a lot more time and resources to install it just to leverage them&lt;/li&gt;
  &lt;li&gt;When you need more features integrated with the platform - although Tanzu provides interesting features (my favourite is &lt;a href=&quot;https://tanzu.vmware.com/build-service&quot;&gt;Tanzu Build Service&lt;/a&gt;), it still lacks some distinguishing ones that would make it more appealing (although they provide some for you to install on your own from the &lt;a href=&quot;https://tanzu.vmware.com/solutions-hub&quot;&gt;Solutions Hub&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I have chosen these four solutions for a Kubernetes on-premise platform because I believe they provide a real alternative to custom-built clusters. These products make it easier to build and maintain production clusters, and in many cases they also help to speed up the development process and provide insights into the deployment process as well.&lt;br /&gt;
So here’s what I would do if I were to choose one:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;if I had a big budget I would go with OpenShift, as it’s just the best&lt;/li&gt;
  &lt;li&gt;if I had a big budget &lt;strong&gt;and&lt;/strong&gt; already existing VMware vSphere infrastructure I would consider Tanzu&lt;/li&gt;
  &lt;li&gt;if I had skilled Kubernetes people in my organization and wanted an easy way to manage my clusters (provisioned manually) without vSphere, I would choose Rancher (and optionally buy support for those clusters when going to prod)&lt;/li&gt;
  &lt;li&gt;if I had skilled Kubernetes people in my organization and wanted to use those fancy OpenShift features, I would go with OKD, as it’s the best alternative to a custom-built Kubernetes cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s not all. Of course you can build your own Kubernetes cluster and it’s a path that is chosen by many organizations. There are many caveats and conditions that need to be met (e.g. scale of such endeavour, type of workloads to be deployed on it) for this to succeed. But that’s a different story which I hope to cover in some other article.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><category term="containers" /><category term="onprem" /><category term="openshift" /><category term="rancher" /><category term="tanzu" /><category term="okd" /><summary type="html">Most people think that Kubernetes was designed to bring more features and more abstraction layers to cloud environments. Well, I think the biggest benefits can be achieved in on-premise environments, because of the big gap between those environments and the ones that can be easily created in the cloud. This opens up many excellent opportunities for organizations which for some reasons choose to stay outside of the public cloud. In order to leverage Kubernetes using on-premise hardware, one of the biggest decisions that needs to be made which software platform to use for Kubernetes. According to the official listing of available Kubernetes distributions, there are dozens of options available. If you look closely at them, however, there are only a few viable ones, as many of them are either inactive or have been merged with other projects (e.g. Pivotal Kubernetes Service merged with VMware Tanzu). I expect that 3-5 of these distributions will eventually prevail in the next 2 years and they will target their own niche market segments. Let’s have a look at those that have stayed in the game and can be used as a foundation for a highly automated on-premise platform. 1. OpenShift I’ll start with the obvious and probably the best choice there is - OpenShift Container Platform. 
I’ve written about this product many times and still there’s no better Kubernetes distribution available on the market that is so rich in features. This also comes with its biggest disadvantage - the price that for some is just too high. OpenShift is Red Hat’s flagship product that is targeted at enterprises. Of course they sell it to medium or even small companies, but the main target group is big enterprises with a big budget. It has also become a platform for Red Hat’s other products or other vendors’ services that are easily installable and available at https://www.operatorhub.io/. OpenShift can be installed in the cloud, but it’s on-premise environments is where it shows its most powerful features. Almost every piece of it is highly automated and this enables easy maintenance of clusters (installation, upgrades and scaling), rapid deployment of supplementary services (databases, service mesh) and platform configuration. There is no other distribution that has achieved that level of automation. OpenShift is also the most complete solution which includes integrated logging, monitoring and CI/CD (although they are still working on switching from Jenkins to Tekton engine which is not that feature-rich yet). When to choose OpenShift If you have a big budget - money can’t bring happiness, but it can buy you the best Kubernetes distribution, so why hesitate? If you want to have the easiest and smoothest experience with Kubernetes - a user-friendly web console that is second to none and comprehensive documentation. 
You don’t plan to scale rapidly but you need a bulletproof solution - OpenShift can be great for even small environments and as long as they won’t grow it can be financially reasonable Your organization has few DevOps/Ops people - OpenShift is less demanding from a maintenance perspective and may help to overcome problems with finding highly skilled Kubernetes and infrastructure experts The systems that your organization builds are complex - in cases where the development and deployment processes require a lot of additional services, there’s no better way to create and maintain clusters on on-premise environments than by using operators (and buying additional support for them if needed) If you need support (?) - I’ve put it here just for the sake of providing some reasonable justification for the high price of an OpenShift subscription, but unfortunately many customers are not satisfied with the level of product support and thus it’s not the biggest advantage here When to avoid OpenShift All you need is Kubernetes API - maybe all these fancy features are just superfluous and just plain Kubernetes distribution is enough, provided that you have a team of skilled people that could build and maintain it If your budget is tight - that’s obvious, but many believe they can somehow overcome the high price of OpenShift by efficiently bin packing their workloads on smaller clusters or get a real bargain when ordering their subscriptions (I guess it’s possible, but only for really big orders for hundreds of nodes) Your organization is an avid supporter of open source projects and avoids any potential vendor lock-ins - although OpenShift includes Kubernetes and can be fully compatible with other Kubernetes distributions, there are some areas where a potential vendor lock-in can occur (e.g. reliance on builtin operators and their APIs) 2. 
OKD Back in the day Red Hat used upstream-downstream strategy for product development where open source upstream projects were free to use and their downstream, commercial products were heavily dependent on their upstreams and built on top of them. That has changed with OpenShift 4 where its open source equivalent - OKD - was released months after OpenShift had been redesigned, with help from guys from CoreOS (Red Hat acquired CoreOS in 2018). So OKD is an open source version of OpenShift and it’s free. It’s a similar strategy that Red Hat has been using for years - to attract people and accustom them to the free (upstream) versions and also give them a very similar experience to their paid products. The only difference is of course lack of support and few features that are available in OpenShift only. That’s what the key factors to consider are when deciding on a Kubernetes platform - does your organization need support or will it get by without it? Things got a little bit more complicated after Red Hat (who own CentOS project) has announced that CentOS 8 will cease to exist in the form that has been known for years. CentOS is widely used by many companies as a free version of RHEL (Red Hat Enterprise Linux) and it looks like it has changed and we don’t know what IBM will do with OKD (I suspect it was their business decision to pull the plug). There’s a risk that OKD will also no longer be developed, or at least it will not resemble OpenShift like it does now. As for now being still very similar to OpenShift, OKD can be also considered as one of the best Kubernetes platforms to use for on-premise installations. When to choose OKD You don’t care about Red Hat addons, but still need a highly automated platform - OKD can still brings your environment to a completely different level by leveraging operators, builtin services (i.e. 
logging, monitoring) You don’t need support, because you have really smart people with Kubernetes skills - either you pay Red Hat for its support or build an internal team that would act as 1st, 2nd and 3rd line of support (not mentioning the vast resources available on the web) You plan to run internal workloads only without exposing them outside - Red Hat brags about providing curated list of container images while OKD relies on community’s work on providing security patches and this causes some delays; for some this can be an acceptable risk, especially if the platform is used internally You need a Kubernetes distribution that is user-friendly - web console in OKD is almost identical to the one in OpenShift which I already described before as second to none; it often helps less experienced users to use it and even more experienced ones can use it to perform daily tasks even faster by leveraging all the information gathered in a concise form You want to decrease costs of OpenShift and use it for testing environments only - this idea seems to be reasonable from the economic point of view and if planned and executed well it makes sense; there are some caveats though (e.g. it is against Red Hat license to use most of their container images) When to avoid OKD Plain Kubernetes is all you need - with all these features comes complexity that may be just not what your organization needs and you’d be better off with some simpler Kubernetes distribution You expect quick fixes and patches - don’t get me wrong, it looks like they are delivered, but it’s not guaranteed and relies solely on community (e.g. 
for OpenShift Origin 3, a predecessor of OKD, some container images used internally by the platform haven’t been updated for months whereas OpenShift provided updates fairly quickly) You need a stable and predictable platform - nobody expected CentOS 8 would no longer be an equivalent to RHEL and so similar decisions of IBM executives can affect OKD and there’s a risk that sometime in the future all OKD users would have no choice but to migrate to some other solution 3. Rancher After Rancher had been accquired by SUSE, a new chapter opened for this niche player on the market. Although SUSE already had their own Kubernetes solution, it’s likely that they will only have a single offering of that type and it’s going to be Rancher. Basically, Rancher offers an easy management of multiple Kubernetes clusters that can be provisioned manually and imported into the Cluster Manager management panel or provisioned by Rancher using its own Kubernetes distribution. They call it RKE - Rancher Kubernetes Engine and it can be installed on most major cloud providers, but also on vSphere. Managing multiple clusters using Rancher is very easy and combining it with plenty of authentication options makes it a really compelling solution for those who plan to manage hybrid, multi-cluster, or even multi-cloud environments. I think that Rancher has initiated many interesting projects, including K3S (simpler Kubernetes control plane targeted for edge computing) , RKE (the aforementioned Kubernetes distribution), and Longhorn (distributed storage). You can see they are in the middle of an intensive development cycle - even by looking at the Rancher’s inconsistent UI which is divided into two: Cluster Manager with a fresh look, decent list of options, and Cluster Explorer that is less pleasant, but offers more insights. 
Let’s hope they will continue improving Rancher and RKE to be even more usable, so that it becomes an even more compelling Kubernetes platform for on-premise environments. When to choose Rancher If you already have VMware vSphere - Rancher makes it very easy to spawn new on-premise clusters by leveraging the vSphere API If you plan to maintain many clusters (all on-premise, hybrid, or multi-cloud) - it’s just easier to manage them from a single place where you log in with unified credentials (it’s very easy to set up authentication against various services) You focus on platform maintenance more than on features supporting development - with a nice integrated backup solution, a CIS benchmark engine, and only a few developer-focused features (I think their CI/CD solution was put there just for marketing purposes - it’s barely usable), it’s just more appealing to infrastructure teams If you really need paid support for your Kubernetes environment - Rancher provides support for its products, including its own Kubernetes distribution (RKE) as well as custom installations; as for the price, it’s a mystery that will be revealed when you contact Sales You need browser-optimized access to your environment - with the built-in shell it’s very easy to access cluster resources without configuring anything on a local machine When to avoid Rancher You don’t care about fancy features - although there are significantly fewer features in Rancher than in OpenShift or OKD, it is still more than just a nice UI, and some may find them redundant and can get by without them You’re interested in more mature products - it looks like Rancher has been in active development over the past few months, and it will probably be redesigned at some point, just like it happened with OpenShift (versions 3 and 4 are very different) You don’t plan or need to use multiple clusters - maybe one is enough? 4. 
VMware Tanzu The last contender is Tanzu from the biggest on-premise virtualization software vendor. When they announced project Pacific, I knew it was going to be huge. And it is. Tanzu is a set of products that leverage Kubernetes and integrate it with vSphere. The product that manages Kubernetes clusters is called Tanzu Kubernetes Grid (TKG), and it’s just the beginning of the Tanzu offering. There’s Tanzu Mission Control for managing multiple clusters, Tanzu Observability for… observability, Tanzu Service Mesh for… yes, it’s their service mesh, and many more. For anyone familiar with enterprise offerings, it may resemble any other product suite from a giant like IBM or Oracle. Let’s be honest here - Tanzu is not for anyone interested in “some” Kubernetes; it’s for enterprises accustomed to enterprise products and everything that comes with them (i.e. sales, support, software that can be downloaded only by authorized users, etc.). And it’s especially designed for those whose infrastructure is based on the VMware ecosystem - it’s a perfect addition that meets the requirements of development teams within an organization, but also addresses operations teams’ concerns with the same tools they have known for over a decade now. When it comes to features, they are pretty standard - easy authentication, cluster scaling, build services based on buildpacks, networking integrated with VMware NSX, storage integrated with vSphere - wait, it’s starting to sound like a feature list of another vSphere add-on. I guess it is an add-on. For those looking for fancy features, I suggest waiting a bit longer for VMware to come up with new Tanzu products (or for a new acquisition of another company from the cloud native world, like they did with Bitnami). 
When to choose Tanzu When your company already uses VMware vSphere - just contact your VMware sales rep, who will prepare an offer, and the team that takes care of your infrastructure will do the rest If you don’t plan to deploy anything outside of your own infrastructure - although VMware tries to be a hybrid provider by enabling integration with AWS or GCP, it will stay focused on the on-premise market where it’s undeniably the leader If you wish to use multiple clusters - Tanzu enables easy creation of Kubernetes clusters that can be assigned to development teams If you need support - it’s an enterprise product with enterprise support When to avoid Tanzu If you don’t already have vSphere in your organization - you need vSphere and its ecosystem, which Tanzu is a part of, to start working with VMware’s Kubernetes services; otherwise it will cost you a lot more time and resources to install it just to leverage them When you need more features integrated with the platform - although Tanzu provides interesting features (my favourite is Tanzu Build Service), it still lacks some distinguishing ones (although they provide some for you to install on your own from the Solutions Hub) that would make it more appealing Conclusion I have chosen these four solutions for a Kubernetes on-premise platform because I believe they provide a real alternative to custom-built clusters. These products make it easier to build and maintain production clusters, but in many cases they also help speed up the development process and provide insights into the deployment process as well. 
So here’s what I would do if I were to choose one: if I had a big budget, I would go with OpenShift, as it’s simply the best if I had a big budget and an existing VMware vSphere infrastructure, I would consider Tanzu if I had skilled Kubernetes people in my organization and wanted an easy way to manage my (manually provisioned) clusters without vSphere, I would choose Rancher (and optionally buy support for those clusters when going to prod) if I had skilled Kubernetes people in my organization and wanted those fancy OpenShift features, I would go with OKD, as it’s the best alternative to a custom-built Kubernetes cluster That’s not all. Of course, you can build your own Kubernetes cluster, and that’s the path chosen by many organizations. There are many caveats and conditions that need to be met (e.g. the scale of such an endeavour, the type of workloads to be deployed on it) for this to succeed. But that’s a different story, which I hope to cover in some other article.</summary></entry><entry><title type="html">Bezbłędni</title><link href="https://blog.cloudowski.com/pl/bezbledni/" rel="alternate" type="text/html" title="Bezbłędni" /><published>2021-01-09T00:00:00+01:00</published><updated>2021-01-09T00:00:00+01:00</updated><id>https://blog.cloudowski.com/pl/bezbledni</id><content type="html" xml:base="https://blog.cloudowski.com/pl/bezbledni/">&lt;p&gt;We were told that this is what we had to be in order to succeed. Flawless. From the very start at school all the way into our professional lives. I dedicate this article to the single biggest obstacle standing in the way of innovation and success - both for our companies and for ourselves, whether we are employees, managers, business owners, or just taking our first professional steps.&lt;br /&gt;
&lt;!--more--&gt;&lt;/p&gt;

&lt;h2 id=&quot;geneza&quot;&gt;Origins&lt;/h2&gt;

&lt;p&gt;Observing our society, and remembering what it looked like when I was a child, later at school, and in the subsequent stages of my life, I have the impression that this fixation on mistakes is very deeply rooted in our society (or at least in a large part of it). Maybe it’s just my own experience, but while browsing the internet I came across an interesting recording of the only source of information our parents and grandparents had in the communist years - Dziennik Telewizyjny (the equivalent of today’s TV news). I recommend that everyone check for themselves how fixated on finding fault society was even back then. There are comical scenes where, due to broken supply chains and a failing economy, the anchor fiercely hunts for someone to blame for the lack of bread in a shop and settles on… its manageress. Of course, such situations happen today as well, and while the habit of looking for culprits instead of solutions is certainly the easiest and most comfortable path, it is extremely limiting and frankly foolish once you stop and think about it for even a moment.&lt;/p&gt;

&lt;h2 id=&quot;dlaczego&quot;&gt;Why&lt;/h2&gt;

&lt;p&gt;It is easy to infect your closest surroundings with this vision of the world. It is tempting enough, because it relieves us of the responsibility of taking risks and doing hard work. It also brings a sense of relief when someone else is the one who failed and got “caught” making a mistake. It is a wonderful escape from being judged and from exposing yourself to ridicule, criticism, and failure - instead, from the safety of your comfort zone, you can sit in the mockers’ box with others who share similar fears.  &lt;br /&gt;
I have been there. I liked my box and my refined sarcasm, which is a powerful weapon and a shield at the same time. It helps you reassure yourself that not stepping out and not confronting the sometimes brutal reality is the right call, especially when that reality is made up precisely of others pointing out our mistakes and drawing no small satisfaction from it.&lt;/p&gt;

&lt;h2 id=&quot;konsekwencje&quot;&gt;Consequences&lt;/h2&gt;

&lt;p&gt;Oh, how comfortable it is to be in a place where the only effort is finding flaws and mistakes in others! And that, let’s be honest, is fairly easy. Stuck in this position, we lose an enormous part of our potential. We channel our energy where we shouldn’t. We avoid risk, even when it is small. We are constantly on the defensive and weigh our decisions just to avoid making a mistake that will be pointed out to us. In our heads, the world is just waiting for us to stumble so it can show us our incompetence.&lt;br /&gt;
A permanently defensive position gives us no opportunity to attack. An offensive built on the arsenal we have now, or the arsenal we lack, to climb out of the trenches and face the world as it is. To see that beyond the trenches there are many people who have also left them, who confront their own weaknesses and bravely defend themselves against the fire of those who will never leave the trenches but shell them fiercely all the same.&lt;/p&gt;

&lt;h2 id=&quot;alternatywy&quot;&gt;Alternatives&lt;/h2&gt;

&lt;p&gt;Besides Theodore Roosevelt’s &lt;a href=&quot;https://en.wikipedia.org/wiki/Citizenship_in_a_Republic&quot;&gt;quote&lt;/a&gt;, in which he compares this fight to stepping into the ring, a quote from Thomas Watson, a former head of IBM, has really stuck with me. Asked whether he would fire an employee whose mistake had cost the company $600k, he replied that he had just spent $600k on training that employee, and wondered how he could possibly fire someone so well trained.&lt;br /&gt;
This requires an almost radical change in how we see the world. It requires accepting the world as it is. I know it sounds like a cliché, but it helps you climb out of the trenches. We are only human - creatures imperfect by design. Even nature makes mistakes; after all, evolution itself is one long series of trial and error. There are still stumbles, diseases, disasters, and things beyond our control. We accept them for what they are and look for ways to prevent cataclysms, or at least to minimize their effects. So why wouldn’t we act the same way in cases where we have more influence? After all, aren’t we each a small evolution of our own, just one limited in time?&lt;/p&gt;

&lt;h2 id=&quot;konstruktywna-krytyka-a-oderwanie-od-rzeczywistości&quot;&gt;Constructive criticism vs. detachment from reality&lt;/h2&gt;

&lt;p&gt;How do we distinguish the shouting of complainers from genuine problems being reported about what we do? After all, it is easy to decide that all criticism is baseless and driven by envy and other emotions not necessarily tied to facts. What seems helpful here is a healthy approach based on the credibility of the source of such opinions. Far more credible are those who have already been in the ring for years and survived many battles, gaining experience along the way. It is worth having a group of such people who will bluntly bring us back down to earth when necessary and suggest solutions when corrective action is genuinely needed.&lt;br /&gt;
Loudmouths and haters are a different matter. Their credibility is low, but their engagement is extremely high. They are an extreme example of people who never intend to step out and try their own strength, because they are too weak. I believe, however, and I see it in various examples, that their voice is barely audible when it has no substance behind it. So it is worth focusing on dialogue with those who care about what we create and listening to their voice, especially when our services are addressed to a wider audience. A good product will defend itself.&lt;/p&gt;

&lt;h2 id=&quot;nasze-miejsce-jest-na-ringu&quot;&gt;Our place is in the ring&lt;/h2&gt;

&lt;p&gt;There will always be those who choose the mockers’ box and snicker at the stumbles and falls of others. Let them stay there, while we step into the ring, because that is where we belong. Ups and downs await us there, but it is also there that dreams are turned into reality.&lt;br /&gt;
I admire those who step into the ring every day and use their energy to create value. I’m keeping my fingers crossed that there will be more and more of us!&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="perfekcjonizm" /><summary type="html">We were told that this is what we had to be in order to succeed. Flawless. From the very start at school all the way into our professional lives. I dedicate this article to the single biggest obstacle standing in the way of innovation and success - both for our companies and for ourselves, whether we are employees, managers, business owners, or just taking our first professional steps.</summary></entry><entry><title type="html">Zosie Samosie</title><link href="https://blog.cloudowski.com/pl/zosie-samosie/" rel="alternate" type="text/html" title="Zosie Samosie" /><published>2020-12-08T00:00:00+01:00</published><updated>2020-12-08T00:00:00+01:00</updated><id>https://blog.cloudowski.com/pl/zosie-samosie</id><content type="html" xml:base="https://blog.cloudowski.com/pl/zosie-samosie/">&lt;p&gt;I have nothing against girls named Zosia. My grandmother bore that name, and some of my friends’ children give it to their daughters, because it really is a beautiful name. Since my childhood it has been stuck in my head, not only in a family context but also because of Julian Tuwim’s poem “Zosia Samosia”. It perfectly describes a commonly encountered attitude - let’s call it “samosizm” (roughly, do-it-yourself-ism).&lt;/p&gt;

&lt;p&gt;That samosizm helped us Poles cope in the hard times of communist Poland (the PRL), when you had to rely on yourself because of the shortage of materials and, above all, of money. Hence the popularity of the “do it yourself” TV programs of the era and the promotion, also by the authorities of the time, of the belief that anything can be done with good intentions alone. After all, “a Pole can do it” is deeply rooted in us, and we proudly repeat it when facing all kinds of professional challenges.&lt;br /&gt;
One could keep looking for the reasons behind the popularity of our samosizm, but I am more interested in its consequences for our domestic IT market. The self-reliance label we wear is also noticed by Western companies, which gladly hire Poles for difficult tasks - because who will manage, if not us? And this resourcefulness is a wonderful trait. Really! We have built a lot on it and will probably build even more, trying in a way to make up for the time lost to the PRL era and to catch up with, and maybe even overtake, the developed countries of the West.&lt;/p&gt;

&lt;p&gt;What worries me is that this approach worked great for reaching a certain level. It seems to me we have already reached it. We no longer need to prove that we can. Now we need something more. We need innovation, and producing it is much harder. And samosizm is an obstacle here. What once helped us rebuild our self-confidence now slows us down on the road to innovation, which decides whether a country is merely a consumer or also a producer of competitive products. Today we are an excellent subcontractor for products dreamed up somewhere in the West. We need, however, to become the originator, and optionally also the maker. That requires accepting that you cannot do everything yourself. Whatever the motives - from actually proving that we can, to looking for savings - the lack of that acceptance is a foot on the brake of innovation, in a race whose consequences we may feel in the coming years. Scattering energy on building non-critical parts of a product by ourselves is simply unnecessary.
We need to start using outside help - whether it is a ready-made product you pay for, or consulting and other services that take the load off us and let us focus on creating unique value. A helpful question here: is what you want to build yourself really a core part of your product? Focusing on the genuinely most important part of the business can multiply the pace of its growth, and problems appearing in other areas, covered by external products or services, can often be addressed much faster and better.&lt;/p&gt;

&lt;p&gt;Polish companies must become more competitive. We cannot remain in the role of subcontractors, no matter how good and how appreciated by Western companies. Innovation is hard work, but it also means full focus on creating it. It’s time to say STOP to samosizm.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="perfekcjonizm" /><summary type="html">I have nothing against girls named Zosia. My grandmother bore that name, and some of my friends’ children give it to their daughters, because it really is a beautiful name. Since my childhood it has been stuck in my head, not only in a family context but also because of Julian Tuwim’s poem “Zosia Samosia”. It perfectly describes a commonly encountered attitude - let’s call it “samosizm” (roughly, do-it-yourself-ism). That samosizm helped us Poles cope in the hard times of communist Poland (the PRL), when you had to rely on yourself because of the shortage of materials and, above all, of money. Hence the popularity of the “do it yourself” TV programs of the era and the promotion, also by the authorities of the time, of the belief that anything can be done with good intentions alone. After all, “a Pole can do it” is deeply rooted in us, and we proudly repeat it when facing all kinds of professional challenges. One could keep looking for the reasons behind the popularity of our samosizm, but I am more interested in its consequences for our domestic IT market. The self-reliance label we wear is also noticed by Western companies, which gladly hire Poles for difficult tasks - because who will manage, if not us? And this resourcefulness is a wonderful trait. Really! We have built a lot on it and will probably build even more, trying in a way to make up for the time lost to the PRL era and to catch up with, and maybe even overtake, the developed countries of the West. What worries me is that this approach worked great for reaching a certain level. It seems to me we have already reached it. We no longer need to prove that we can. Now we need something more. 
We need innovation, and producing it is much harder. And samosizm is an obstacle here. What once helped us rebuild our self-confidence now slows us down on the road to innovation, which decides whether a country is merely a consumer or also a producer of competitive products. Today we are an excellent subcontractor for products dreamed up somewhere in the West. We need, however, to become the originator, and optionally also the maker. That requires accepting that you cannot do everything yourself. Whatever the motives - from actually proving that we can, to looking for savings - the lack of that acceptance is a foot on the brake of innovation, in a race whose consequences we may feel in the coming years. Scattering energy on building non-critical parts of a product by ourselves is simply unnecessary. We need to start using outside help - whether it is a ready-made product you pay for, or consulting and other services that take the load off us and let us focus on creating unique value. A helpful question here: is what you want to build yourself really a core part of your product? Focusing on the genuinely most important part of the business can multiply the pace of its growth, and problems appearing in other areas, covered by external products or services, can often be addressed much faster and better. Polish companies must become more competitive. We cannot remain in the role of subcontractors, no matter how good and how appreciated by Western companies. Innovation is hard work, but it also means full focus on creating it. 
It’s time to say STOP to samosizm.</summary></entry><entry><title type="html">How to modify containers without rebuilding their image</title><link href="https://blog.cloudowski.com/articles/how-to-modify-containers-wihtout-rebuilding/" rel="alternate" type="text/html" title="How to modify containers without rebuilding their image" /><published>2020-09-26T00:00:00+02:00</published><updated>2020-09-26T00:00:00+02:00</updated><id>https://blog.cloudowski.com/articles/how-to-modify-containers-wihtout-rebuilding</id><content type="html" xml:base="https://blog.cloudowski.com/articles/how-to-modify-containers-wihtout-rebuilding/">&lt;p&gt;Containers are a beautiful piece of technology that eases the development of modern applications and the maintenance of modern environments. One thing that draws many people to them is how they reduce the time required to set up a service, or a whole environment, with everything included. This is possible mainly because there are so many container images available and ready to use. You will probably need to build your own container images with your applications, but many containers in your environment will use prebuilt images prepared by someone else. It’s especially worth considering for software that is provided by a software vendor or a trusted group of developers, as has been done in the case of the “official” images published on Docker Hub. In both cases, it makes your life easier by letting someone else take care of updates, packaging new versions, and making sure everything works.&lt;br /&gt;
But what if you want to change something in those images? Maybe it’s a minor change, or something bigger that is specific to your particular usage of the service. Your first instinct may be to rebuild the image. This, however, brings some overhead - the images will have to be published, rebuilt whenever new upstream versions come out, and you lose most of the benefits that come with prebuilt versions.&lt;br /&gt;
There is an alternative to that - in fact, I found four alternatives, which I describe below. These solutions let you keep all the benefits and adjust the behavior of running containers in a seamless way.&lt;/p&gt;

&lt;h2 id=&quot;method-1---init-containers&quot;&gt;Method 1 - init-containers&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/docs/concepts/workloads/pods/init-containers/&quot;&gt;Init-containers&lt;/a&gt; were created to provide additional functionality to the main container (or containers) defined in a Pod. They are executed before the main container and can use a different container image. In case of any failure, they prevent the main container from starting. All logs can be easily retrieved and troubleshooting is fairly simple - logs are fetched just like for any other container defined in a Pod, by providing its name. This method is quite popular with services such as databases, which use it to initialize and configure themselves based on configuration parameters.&lt;/p&gt;

&lt;h4 id=&quot;example&quot;&gt;Example&lt;/h4&gt;

&lt;p&gt;The following example uses a dedicated empty volume for storing data initialized by an init-container. In this specific case, it’s just a simple “echo” command, but in a real-world scenario, this can be a script that does something more complex.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apps/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deployment&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx-init&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;matchLabels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;initContainers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;prepare-webpage&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;busybox:1.28&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sh&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-c&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;args&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;
              &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;set&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-x;&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'&amp;lt;h2&amp;gt;Page&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;prepared&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;by&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;an&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;init&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;container&amp;lt;/h2&amp;gt;'&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/web/index.html;&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'Init&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;finished&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;successfully'&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;volumeMounts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;mountPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/web&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx:1.19&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;volumeMounts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;mountPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/usr/share/nginx/html/&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;80&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;http&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;emptyDir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
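
As mentioned earlier, the init container’s logs are fetched like any other container’s, by providing its name. Assuming a running cluster, and that the manifest above was saved as `nginx-init.yaml` (the file name is an arbitrary assumption), checking it could look roughly like this:

```shell
# Apply the manifest and watch the init container run before nginx starts
kubectl apply -f nginx-init.yaml
kubectl get pods -l app=nginx -w

# Fetch logs of the init container by providing its name
kubectl logs deploy/nginx-init -c prepare-webpage
```

If the init container fails, the Pod stays in the `Init:Error` (or `Init:CrashLoopBackOff`) state and the main nginx container never starts, which is exactly the gate described above.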

&lt;h2 id=&quot;method-2---post-start-hook&quot;&gt;Method 2 - post-start hook&lt;/h2&gt;

&lt;p&gt;A post-start &lt;a href=&quot;https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/&quot;&gt;hook&lt;/a&gt; can be used to execute some action just after the main container starts. It can be either a script executed in the same context as the container or an HTTP request executed against a defined endpoint. In most cases, it will probably be a shell script. The Pod stays in the &lt;em&gt;ContainerCreating&lt;/em&gt; state until the script ends. It can be tricky to debug, since no logs are available. There are more &lt;a href=&quot;https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-delivery-guarantees&quot;&gt;caveats&lt;/a&gt;, so this should be used only for simple, non-invasive actions. The best feature of this method is that the script is executed when the service in the main container starts, so it can be used to interact with the service (e.g. by executing some API requests). Combined with a proper readinessProbe configuration, this gives a nice way of initializing the application before any requests are allowed.&lt;/p&gt;
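
The readinessProbe pairing mentioned above could be sketched roughly as follows - an illustrative fragment only, where the probe path, port, and timings are assumptions, not taken from the article:

```yaml
containers:
  - name: app
    image: nginx:1.19
    lifecycle:
      postStart:
        exec:
          # Runs right after the container starts; the Pod stays in
          # ContainerCreating until this command finishes.
          command: ["sh", "-c", "echo ok > /usr/share/nginx/html/ready.html"]
    readinessProbe:
      # Traffic is routed to the Pod only after this probe succeeds,
      # so the post-start initialization is done before any requests arrive.
      httpGet:
        path: /ready.html
        port: 80
      initialDelaySeconds: 2
      periodSeconds: 5
```

Note the difference in failure modes: a failing post-start hook kills the container, while a failing readiness probe merely keeps the Pod out of the Service endpoints.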

&lt;h4 id=&quot;example-1&quot;&gt;Example&lt;/h4&gt;

&lt;p&gt;In the following example, a post-start hook executes the &lt;code class=&quot;highlighter-rouge&quot;&gt;echo&lt;/code&gt; command, but again - this can be anything that uses the files available on the container filesystem to perform some sort of initialization.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apps/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deployment&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx-hook&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;matchLabels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx:1.19&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;80&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;http&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;lifecycle&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;postStart&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
                  &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;
                    &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sh&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;
                    &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-c&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;
                    &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sleep&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;5;set&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-x;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;echo&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'&amp;lt;h2&amp;gt;Page&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;prepared&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;by&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;a&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;PostStart&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;hook&amp;lt;/h2&amp;gt;'&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/usr/share/nginx/html/index.html&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;
                  &lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;method-3---sidecar-container&quot;&gt;Method 3 - sidecar container&lt;/h2&gt;

&lt;p&gt;This method leverages the concept of the Pod, where multiple containers run at the same time, sharing the IPC and network kernel namespaces. It’s been widely used in the Kubernetes ecosystem by projects such as Istio, Consul Connect, and many others. The assumption here is that all containers run simultaneously, which makes it a little tricky to use a sidecar container to modify the behavior of the main container. But it’s doable, and it can be used to interact with the running application or service. I’ve been using this pattern with the &lt;a href=&quot;https://github.com/jenkinsci/helm-charts/tree/main/charts/jenkins&quot;&gt;Jenkins helm chart&lt;/a&gt;, where a sidecar container is responsible for reading ConfigMap objects with Configuration-as-Code entries.&lt;/p&gt;

&lt;h4 id=&quot;example-2&quot;&gt;Example&lt;/h4&gt;

&lt;p&gt;Nothing new here, just the “echo” command with a little caveat - since sidecar containers must obey the &lt;code class=&quot;highlighter-rouge&quot;&gt;restartPolicy&lt;/code&gt; setting, they must keep running after they finish their actions, hence the simple infinite &lt;code class=&quot;highlighter-rouge&quot;&gt;while&lt;/code&gt; loop. In more advanced cases, this would rather be a small daemon (or a loop that checks some state) running like a service.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apps/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deployment&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx-sidecar&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;matchLabels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx:1.19&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;volumeMounts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;mountPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/usr/share/nginx/html/&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;80&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;http&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;prepare-webpage&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;busybox:1.28&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sh&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-c&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;args&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;
              &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;set&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-x;&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'&amp;lt;h2&amp;gt;Page&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;prepared&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;by&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;a&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sidecar&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;container&amp;lt;/h2&amp;gt;'&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/web/index.html;&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;while&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;:;do&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sleep&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;9999;done&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;volumeMounts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;mountPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/web&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;emptyDir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;method-4---entrypoint&quot;&gt;Method 4 - entrypoint&lt;/h2&gt;

&lt;p&gt;The last method uses the same container image and is similar to the Post-start hook, except that it runs before the main app or service. As you probably know, every container image has an &lt;code class=&quot;highlighter-rouge&quot;&gt;ENTRYPOINT&lt;/code&gt; command defined (explicitly or &lt;a href=&quot;https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact&quot;&gt;implicitly&lt;/a&gt;), and we can leverage it to execute arbitrary scripts. Many official images use this technique, and in this method we simply prepend our own script to modify the behavior of the main container. In more advanced scenarios, you could provide a modified version of the original entrypoint file.&lt;/p&gt;

&lt;h4 id=&quot;example-3&quot;&gt;Example&lt;/h4&gt;

&lt;p&gt;This method is a little more complex and involves creating a ConfigMap with the content of a script that is executed before the main entrypoint. Our script wrapping the nginx entrypoint is embedded in the following ConfigMap:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ConfigMap&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;scripts&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;data&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;prestart-script.sh&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|-&lt;/span&gt;
    &lt;span class=&quot;no&quot;&gt;#!/usr/bin/env bash&lt;/span&gt;

    &lt;span class=&quot;no&quot;&gt;echo '&amp;lt;h2&amp;gt;Page prepared by a script executed before entrypoint container&amp;lt;/h2&amp;gt;' &amp;gt; /usr/share/nginx/html/index.html&lt;/span&gt;

    &lt;span class=&quot;no&quot;&gt;exec /docker-entrypoint.sh nginx -g &quot;daemon off;&quot; # it's &quot;ENTRYPOINT CMD&quot; extracted from the main container image definition&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The crucial part is the last line with &lt;code class=&quot;highlighter-rouge&quot;&gt;exec&lt;/code&gt;. It executes the original entrypoint script, which must match exactly what is defined in the Dockerfile. In this case, it also requires the additional arguments defined in the &lt;a href=&quot;https://github.com/nginxinc/docker-nginx/blob/1.19.2/stable/buster/Dockerfile#L110&quot;&gt;CMD&lt;/a&gt;.&lt;/p&gt;
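&lt;p&gt;Why does &lt;code class=&quot;highlighter-rouge&quot;&gt;exec&lt;/code&gt; matter? It replaces the wrapper shell with the original entrypoint process instead of forking a child, so nginx keeps the same PID (PID 1 in the container) and receives signals such as SIGTERM directly during Pod shutdown. A standalone sketch (not part of the manifests above) demonstrates this in any POSIX shell:&lt;/p&gt;

```shell
# 'exec' replaces the current shell process, so the PID is preserved.
# The outer shell prints its PID, then exec's into another shell
# that prints its own PID - both lines show the same number.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(echo "$pids" | head -n 1)
second=$(echo "$pids" | tail -n 1)
if [ "$first" = "$second" ]; then
  echo "exec kept the same PID"
fi
```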

&lt;p&gt;Now let’s define the Deployment object:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apps/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deployment&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx-script&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;matchLabels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx:1.19&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;bash&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-c&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/scripts/prestart-script.sh&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;80&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;http&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;volumeMounts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;mountPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/scripts&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;scripts&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;scripts&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;configMap&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;scripts&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;defaultMode&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;0755&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# &amp;lt;- this is important&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This is pretty straightforward - we override the entrypoint with &lt;code class=&quot;highlighter-rouge&quot;&gt;command&lt;/code&gt;, and we must also make sure our script is mounted with executable permissions (which is why &lt;code class=&quot;highlighter-rouge&quot;&gt;defaultMode&lt;/code&gt; needs to be defined).&lt;/p&gt;
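&lt;p&gt;The reason &lt;code class=&quot;highlighter-rouge&quot;&gt;defaultMode&lt;/code&gt; is needed: files projected from a ConfigMap are mounted with mode 0644 by default, which lacks the execute bit, so running the script by path would fail with a permission error. A quick local illustration of the difference (a standalone sketch, unrelated to the cluster):&lt;/p&gt;

```shell
# Files with mode 0644 are not executable; 0755 adds the execute bit,
# which invoking /scripts/prestart-script.sh by path requires.
f=$(mktemp)
chmod 0644 "$f"
if [ -x "$f" ]; then echo "0644: executable"; else echo "0644: not executable"; fi
chmod 0755 "$f"
if [ -x "$f" ]; then echo "0755: executable"; fi
rm -f "$f"
```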

&lt;h2 id=&quot;comparison-table&quot;&gt;Comparison table&lt;/h2&gt;

&lt;p&gt;Here’s the table that summarizes the differences between the aforementioned methods:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;Init-containers&lt;/th&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;Post-start hook&lt;/th&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;Sidecar container&lt;/th&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;Entrypoint&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Can connect to the main process&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;❌&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;❌&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Can use a different image&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;❌&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Easy to debug&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;❌&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Stops the main container when fails&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;❌&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;Containers are about reusability and often it’s much easier to make small adjustments without rebuilding the whole container image and taking over the responsibility of publishing and maintaining it. It’s just an implementation of the &lt;a href=&quot;https://en.wikipedia.org/wiki/KISS_principle&quot;&gt;KISS principle&lt;/a&gt;.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><category term="containers" /><category term="docker" /><summary type="html">Containers are a beautiful piece of technology that ease the development of modern applications and also the maintenance of modern environments. One thing that draws many people to them is how they reduce the time required to set up a service, or a whole environment, with everything included. It is possible mainly because there are so many container images available and ready to use. You will probably need to build your own container images with your applications, but many containers in your environment will use prebuilt images prepared by someone else. It’s especially worth considering for software that is provided by the software vendor or a trusted group of developers like it has been done in the case of “official” images published on Docker Hub. In both cases, it makes your life easier by letting someone else take care of updates, packaging new versions, and making sure it works. But what if you want to change something in those images? Maybe it’s a minor change or something bigger that is specific for your particular usage of the service. The first instinct may tell you to rebuild that image. This, however, brings some overhead - these images will have to be published, rebuilt when new upstream versions are published, and you lose most of the benefits that come with those prebuilt versions. There is an alternative to that - actually, I found four of them which I will describe below. 
These solutions will allow you to keep all the benefits and adjust the behavior of running containers in a seamless way. Method 1 - init-containers Init-containers were created to provide additional functionality to the main container (or containers) defined in a Pod. They are executed before the main container and can use a different container image. In case of any failure, they will prevent the main container from starting. All logs can be easily retrieved and troubleshooting is fairly simple - they are fetched just like any other container defined in a Pod by providing its name. This method is quite popular among services such as databases to initialize and configure them based on configuration parameters. Example The following example uses a dedicated empty volume for storing data initialized by an init-container. In this specific case, it’s just a simple “echo” command, but in a real-world scenario, this can be a script that does something more complex. apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: nginx-init spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: initContainers: - name: prepare-webpage image: busybox:1.28 command: [&quot;sh&quot;, &quot;-c&quot;] args: [ &quot;set -x; echo '&amp;lt;h2&amp;gt;Page prepared by an init container&amp;lt;/h2&amp;gt;' &amp;gt; /web/index.html; echo 'Init finished successfully' &quot;, ] volumeMounts: - mountPath: /web name: web containers: - image: nginx:1.19 name: nginx volumeMounts: - mountPath: /usr/share/nginx/html/ name: web ports: - containerPort: 80 name: http volumes: - name: web emptyDir: {} Method 2 - post-start hook A Post-start hook can be used to execute some action just after the main container starts. It can be either a script executed in the same context as the container or an HTTP request that is executed against a defined endpoint. In most cases, it would probably be a shell script. 
Pod stays in the ContainerCreating state until this script ends. It can be tricky to debug since there are no logs available. There are more caveats and this should be used only for simple, non-invasive actions. The best feature of this method is that the script is executed when the service in the main container starts and can be used to interact with the service (e.g. by executing some API requests). With a proper readinessProbe configuration, this can give a nice way of initializing the application before any requests are allowed. Example In the following example a post-start hook executes the echo command, but again - this can be anything that uses the same set of files available on the container filesystem in order to perform some sort of initialization. apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: nginx-hook spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx:1.19 name: nginx ports: - containerPort: 80 name: http lifecycle: postStart: exec: command: [ &quot;sh&quot;, &quot;-c&quot;, &quot;sleep 5;set -x; echo '&amp;lt;h2&amp;gt;Page prepared by a PostStart hook&amp;lt;/h2&amp;gt;' &amp;gt; /usr/share/nginx/html/index.html&quot;, ] Method 3 - sidecar container This method leverages the concept of the Pod where multiple containers run at the same time sharing IPC and network kernel namespaces. It’s been widely used in the Kubernetes ecosystem by projects such as Istio, Consul Connect, and many others. The assumption here is that all containers run simultaneously which makes it a little bit tricky to use a sidecar container to modify the behaviour of the main container. But it’s doable and it can be used to interact with the running application or a service. I’ve been using this feature with the Jenkins helm chart where there’s a sidecar container responsible for reading ConfigMap objects with Configuration-as-Code config entries. 
Example Nothing new here, just the “echo” command with a little caveat - since sidecar containers must obey restartPolicy setting, they must run after they finish their actions and thus it uses a simple while infinite loop. In more advanced cases this would be rather some small daemon (or a loop that checks some state) that runs like a service. apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: nginx-sidecar spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx:1.19 name: nginx volumeMounts: - mountPath: /usr/share/nginx/html/ name: web ports: - containerPort: 80 name: http - name: prepare-webpage image: busybox:1.28 command: [&quot;sh&quot;, &quot;-c&quot;] args: [ &quot;set -x; echo '&amp;lt;h2&amp;gt;Page prepared by a sidecar container&amp;lt;/h2&amp;gt;' &amp;gt; /web/index.html; while :;do sleep 9999;done &quot;, ] volumeMounts: - mountPath: /web name: web volumes: - name: web emptyDir: {} Method 4 - entrypoint The last method uses the same container image and is similar to the Post-start hook except it runs before the main app or service. As you probably know in every container image there is an ENTRYPOINT command defined (explicitly or implicitly) and we can leverage it to execute some arbitrary scripts. It is often used by many official images and in this method we will just prepend our own script to modify the behavior of the main container. In more advanced scenarios you could actually provide a modified version of the original entrypoint file. Example This method is a little bit more complex and involves creating a ConfigMap with a script content that is executed before the main entrypoint. 
Our script for modifying nginx entrypoint is embedded in the following ConfigMap apiVersion: v1 kind: ConfigMap metadata: name: scripts data: prestart-script.sh: |- #!/usr/bin/env bash echo '&amp;lt;h2&amp;gt;Page prepared by a script executed before entrypoint container&amp;lt;/h2&amp;gt;' &amp;gt; /usr/share/nginx/html/index.html exec /docker-entrypoint.sh nginx -g &quot;daemon off;&quot; # it's &quot;ENTRYPOINT CMD&quot; extracted from the main container image definition One thing that is very important is the last line with exec. It executes the original entrypoint script and must match it exactly as it is defined in the Dockerfile. In this case it requires additional arguments that are defined in the CMD. Now let’s define the Deployment object apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: nginx-script spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx:1.19 name: nginx command: [&quot;bash&quot;, &quot;-c&quot;, &quot;/scripts/prestart-script.sh&quot;] ports: - containerPort: 80 name: http volumeMounts: - mountPath: /scripts name: scripts volumes: - name: scripts configMap: name: scripts defaultMode: 0755 # &amp;lt;- this is important That is pretty straightforward - we override the entrypoint with command and we also must make sure our script is mounted with proper permissions (thus defaultMode needs to be defined). Comparison table Here’s the table that summarizes the differences between the aforementioned methods:   Init-containers Post-start hook Sidecar container Entrypoint Can connect to the main process ❌ ✅ ✅ ❌ Can use a different image ✅ ❌ ✅ ✅ Easy to debug ✅ ❌ ✅ ✅ Stops the main container when fails ✅ ✅ ❌ ✅ Conclusion Containers are about reusability and often it’s much easier to make small adjustments without rebuilding the whole container image and take over the responsibility of publishing and maintaining it. 
It’s just an implementation of the KISS principle.</summary></entry><entry><title type="html">The challenges of multi-cloud environments</title><link href="https://blog.cloudowski.com/articles/the-challenges-of-multi-cloud-environments-copy/" rel="alternate" type="text/html" title="The challenges of multi-cloud environments" /><published>2020-09-06T00:00:00+02:00</published><updated>2020-09-06T00:00:00+02:00</updated><id>https://blog.cloudowski.com/articles/the-challenges-of-multi-cloud-environments%20copy</id><content type="html" xml:base="https://blog.cloudowski.com/articles/the-challenges-of-multi-cloud-environments-copy/">&lt;p&gt;When this whole IT revolution began, we started with one computer the size of a room; then we invented server rooms and began dividing servers into virtual machines, but apparently that wasn’t good enough. Then the cloud revolution came, and it has been a game changer ever since. With cloud computing we got self-service through API calls that let us create various resources in different parts of the world. What an excellent and convenient solution! Why would anyone want more?&lt;br /&gt;
It turns out that there are a couple of reasons why one would want to move to another level - &lt;strong&gt;multi-cloud&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id=&quot;reasons-for-going-multi-cloud&quot;&gt;Reasons for going multi-cloud&lt;/h2&gt;

&lt;p&gt;There are many reasons, and I have chosen the most important ones - those with the biggest impact on the decision-making process.&lt;/p&gt;

&lt;h3 id=&quot;costs&quot;&gt;Costs&lt;/h3&gt;

&lt;p&gt;Although the most popular cloud services are comparable when it comes to costs, there can be slight differences between cloud providers, especially when you consider geographical placement. Besides, at a larger scale, virtual machines that are a few percent cheaper can save thousands of dollars in total, which is worth considering.&lt;br /&gt;
When using multiple cloud providers you can also leverage the fact that you are able to move your workload to a competitor, and negotiate better terms for your contract.&lt;/p&gt;

&lt;h3 id=&quot;vendor-lock-in-avoidance&quot;&gt;Vendor lock-in avoidance&lt;/h3&gt;

&lt;p&gt;Many companies try to avoid vendor lock-in, and even if they have a preferred cloud provider, a multi-cloud policy is what assures the portability of their services to another provider. This also has a positive side effect - by using multiple cloud providers you can compare which of them is better not only in terms of cost, but also reliability (SLAs), stability, speed, and other factors your organization finds important.&lt;/p&gt;

&lt;h3 id=&quot;broad-saas-offering&quot;&gt;Broad SaaS offering&lt;/h3&gt;

&lt;p&gt;When providing a product in a SaaS model, it is practically mandatory to have an offering available on all major cloud platforms. It is crucial especially if you offer direct access from your customers’ environments by linking their networks with a dedicated environment where you run your software. This is a fairly popular model among companies providing latency-sensitive services such as databases.&lt;/p&gt;

&lt;h3 id=&quot;higher-availability&quot;&gt;Higher availability&lt;/h3&gt;

&lt;p&gt;No cloud provider can give you 100% availability, and how many &lt;em&gt;“nines”&lt;/em&gt; they offer in their SLAs depends on the service type and sometimes its tier. So to increase the availability of your service you may need to leverage not only multiple regions, but also multiple cloud providers. Sometimes you are tied to a particular geographic region, and an outage of a crucial service there can affect your own services; spreading them across different cloud providers avoids this, since providers place their regions in similar locations.&lt;/p&gt;
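To see why spreading across providers raises the availability ceiling, here is a quick back-of-the-envelope sketch. The percentages are illustrative assumptions, not any provider's actual SLA figures, and it assumes failures of the two providers are independent:

```python
# Illustrative availability math (figures are hypothetical, not real SLAs).
def combined_availability(availabilities):
    """Availability of a service that stays up as long as at least one
    independent replica is up: 1 minus the product of individual downtimes."""
    downtime = 1.0
    for a in availabilities:
        downtime *= (1.0 - a)
    return 1.0 - downtime

single = 0.999                                  # "three nines" on one provider
dual = combined_availability([0.999, 0.999])    # same app on two providers
print(f"single provider: {single:.4%}")         # 99.9000%
print(f"two providers:   {dual:.6%}")           # 99.999900%
```

In other words, under the independence assumption, a second provider turns roughly 8.7 hours of yearly downtime into about half a minute.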

&lt;h3 id=&quot;leverage-unique-services&quot;&gt;Leverage unique services&lt;/h3&gt;

&lt;p&gt;Not every cloud provider offers the same set of services, and a unique service may be a reason to use a second provider even if you’re running most of your applications on the first one. This requires not only setting up the secondary provider and learning how to manage it, but it also opens up new opportunities to test and compare other services since you’ve come such a long way.&lt;/p&gt;

&lt;h2 id=&quot;kubernetes-as-a-multi-cloud-service&quot;&gt;Kubernetes as a multi-cloud service&lt;/h2&gt;

&lt;p&gt;Now I will focus on a small but crucial part of cloud offerings - managed Kubernetes services, with all the features and challenges behind them. It is especially important for organizations which run their own software on Kubernetes, or software provided to them by external vendors. Nowadays it’s not only software that is eating the world - containers have started taking a huge bite of it too.&lt;/p&gt;

&lt;h2 id=&quot;when-one-is-enough&quot;&gt;When one is enough&lt;/h2&gt;

&lt;p&gt;There are cases where multi-cloud can bring a lot of benefits, but there are dozens more where it makes little or no sense. This is especially true with Kubernetes, which is designed for distributed systems and can leverage the underlying cloud infrastructure to provide the following features:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;easy scalability of applications running as containers&lt;/li&gt;
  &lt;li&gt;high availability and resilience thanks to the scheduler and its features (i.e. pod affinity and anti-affinity rules, node affinity rules and automatic distribution among different availability zones)&lt;/li&gt;
  &lt;li&gt;geographical distribution using multiple clusters and service mesh connecting them together (additionally there will be a centralized control plane for those distributed clusters - &lt;a href=&quot;https://github.com/kubernetes-sigs/kubefed&quot;&gt;kubefed&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
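The automatic zone distribution and scheduler affinity rules listed above can also be requested explicitly in a Pod template. A minimal sketch (the app name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                 # hypothetical name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      # Spread replicas evenly across availability zones
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: webapp
      # Prefer not to co-locate two replicas on the same node
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: webapp
      containers:
        - name: webapp
          image: nginx:1.19
```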

&lt;p&gt;If only that solved all of the challenges. Some of them are not easily addressed by Kubernetes itself, but they can be tackled in other ways.&lt;/p&gt;

&lt;h2 id=&quot;challenge-1---storage&quot;&gt;Challenge 1 - storage&lt;/h2&gt;

&lt;p&gt;This is probably &lt;strong&gt;the biggest challenge&lt;/strong&gt; of all - how do you provide consistent storage for distributed systems that run not only in various regions but also on different cloud providers?&lt;br /&gt;
Here’s the simplest solution - just don’t. If you can skip this part and design your systems in a way that does not require storage synchronization, you’ll save yourself a lot of time and avoid the problems that would eventually come up.&lt;/p&gt;

&lt;p&gt;In case you really need to have everything in sync between all sites and regions, think it through once again. Then, if you really, I mean really, really want it, consider a storage solution that implements synchronization between multiple sites.
One of the most interesting projects for distributed databases is &lt;a href=&quot;https://vitess.io/&quot;&gt;Vitess&lt;/a&gt;, based on a solution used by YouTube. It is built on MySQL, so if that’s fine for you, you can start experimenting with it to create a multi-region, multi-cloud solution that spans multiple sites (e.g. multiple Kubernetes clusters). For Kubernetes there is even an &lt;a href=&quot;https://github.com/vitessio/vitess-operator&quot;&gt;operator&lt;/a&gt; that makes it quite easy to set up.
&lt;a href=&quot;http://cassandra.apache.org/&quot;&gt;Cassandra&lt;/a&gt; is a more mature alternative to Vitess, but it is not MySQL-compatible and requires you to design your app specifically for that type of database. There are a couple of operators as well - &lt;a href=&quot;https://github.com/instaclustr/cassandra-operator&quot;&gt;this&lt;/a&gt; one and &lt;a href=&quot;https://github.com/datastax/cass-operator&quot;&gt;this&lt;/a&gt; one provided by Datastax.&lt;br /&gt;
It is also worth mentioning projects such as &lt;a href=&quot;https://rook.io/&quot;&gt;Rook&lt;/a&gt; or &lt;a href=&quot;https://openebs.io/&quot;&gt;OpenEBS&lt;/a&gt;, which provide a low-level layer on which something more universal can be built.&lt;/p&gt;
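As an illustration of that low-level layer, a Rook-provisioned volume is consumed through an ordinary PersistentVolumeClaim. A sketch, assuming a Rook-Ceph cluster already exposes a storage class named `rook-ceph-block` (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block   # assumed Rook-Ceph storage class
  resources:
    requests:
      storage: 10Gi
```

The application mounts the claim as usual; Rook handles the replication underneath, but note that this alone does not synchronize data across clusters or clouds.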

&lt;h2 id=&quot;challenge-2---networking&quot;&gt;Challenge 2 - networking&lt;/h2&gt;

&lt;p&gt;Placing your applications in multiple regions and cloud providers can in fact be quite easy with Kubernetes; the next step is to route traffic to these environments. With a single site it’s trivial - all you need is some DNS records pointing to your load balancers, and the rest is handled by Kubernetes.&lt;br /&gt;
For multi-cloud there are some caveats. First, you should avoid cloud-specific configurations, such as Ingress definitions that leverage features of a particular cloud provider. This is the point where providers offer a really long and compelling list of features, configurable in their specific way, that makes a perfect trap for vendor lock-in.&lt;br /&gt;
Second, and in my opinion the most challenging part, is connectivity between your clusters. You need to make a very important decision on whether to treat your clusters independently or as a whole. If you decide on the former, your life will be much easier; if you prefer the latter, you will need to invest much more time, but you’ll get a huge, distributed cluster in return.
For independent, not interconnected clusters you need a method for routing external traffic to them. To make things short and easy - choose AWS Route53, as this is the best method of load balancing traffic to the same application deployed on multiple clusters running in different clouds and regions. It provides health checks that cut off misbehaving clusters automatically. Currently I just don’t see a better way to handle this.&lt;br /&gt;
When creating a swarm-like configuration where applications communicate with each other, you need a way to connect all clusters, and fortunately there are a few methods you can use. I would consider either &lt;a href=&quot;https://www.consul.io/docs/connect&quot;&gt;HashiCorp Consul&lt;/a&gt; or &lt;a href=&quot;https://istio.io/latest/docs/setup/install/multicluster/&quot;&gt;Istio&lt;/a&gt;. They are both service meshes built on &lt;a href=&quot;https://www.envoyproxy.io/&quot;&gt;Envoy&lt;/a&gt;, which is configured automatically to proxy traffic between all applications, and they provision special proxies at the edge of each cluster that interconnect them in a transparent way. If you also need to manage those clusters from a single place, you may use the &lt;a href=&quot;https://github.com/kubernetes-sigs/kubefed&quot;&gt;kubefed&lt;/a&gt; project - they should release the beta version soon.&lt;/p&gt;
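To stay portable, an Ingress can stick to core fields and select the controller through the standard class mechanism rather than provider-specific annotations. A sketch (host and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  # No cloud-specific annotations here - those are the lock-in trap
spec:
  ingressClassName: nginx        # a controller you can run on any cloud
  rules:
    - host: app.example.com      # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp     # hypothetical Service
                port:
                  number: 80
```

The same manifest then applies unchanged on every cluster, and the per-cloud specifics stay confined to how the ingress controller itself is deployed.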

&lt;h2 id=&quot;challenge-3---differences-in-kubernetes-services&quot;&gt;Challenge 3 - differences in Kubernetes services&lt;/h2&gt;

&lt;p&gt;Although Kubernetes is all about abstraction layers and portability, cloud providers are aware of its growing popularity, and they want not only to attract more users to their cloud services but also to keep them there for a long time. You can see the many ways they discourage you from migrating to, or even just using, other cloud providers - and that’s why there are quite a few differences between managed Kubernetes services. This results from the fact that a managed Kubernetes service gives the cloud provider control over all Kubernetes parameters, and there are &lt;a href=&quot;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/&quot;&gt;plenty of them&lt;/a&gt;, alongside &lt;a href=&quot;https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/&quot;&gt;feature gates&lt;/a&gt; that modify the behaviour of the clusters.
One of the most evident examples is AWS EKS authentication. It’s the one managed Kubernetes service where you cannot use &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certs&quot;&gt;certificates&lt;/a&gt; to authenticate, and most likely you will end up with IAM accounts or service account tokens. Some features greatly enhance performance, such as &lt;a href=&quot;https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing&quot;&gt;Network Endpoint Groups&lt;/a&gt; available on Google GKE, and it’s just hard to switch to another, often inferior, solution. The same applies to the availability of Istio and to different logging and monitoring approaches. These are hard choices you have to make if you really want portability, and the ultimate answer to this challenge is to provision your cluster manually on IaaS services. This approach applies to OpenShift/OKD or any other installer that provisions a control plane with configurable masters.&lt;/p&gt;
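For illustration, an EKS kubeconfig typically delegates authentication to IAM through an exec plugin instead of client certificates. A sketch of the relevant fragment (the cluster name is hypothetical):

```yaml
# kubeconfig fragment - kubectl shells out to the AWS CLI for a token
users:
  - name: eks-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "my-cluster"]
```

A kubeconfig built this way works only where the AWS CLI and IAM credentials are available, which is exactly the kind of provider-specific coupling the chapter warns about.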

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;The multi-cloud approach is hard. Even with Kubernetes, it can sometimes be easier to accept that keeping the mere possibility of using multiple cloud providers - let alone actually implementing it - requires a lot more effort than sticking with one. Those who choose the harder path will realize an almost utopian vision: an interconnected swarm of containers running on different IaaS implementations, in different regions, and perhaps operated in completely different ways underneath all the abstraction layers that Kubernetes creates.&lt;br /&gt;
Even if the multi-cloud concept is abstract, and for some even unnecessary, hybrid solutions are often a must for many organizations, and the rules that allow building them are the same. I expect this to be a hot topic in the upcoming months and years, and I can’t wait to see and help implement more of those setups. After all, with Kubernetes it has never been easier.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><category term="containers" /><category term="cloud" /><category term="multicloud" /><category term="hybrid" /><summary type="html">When this all IT revolution began, we started with one computer that was the size of a room, then we invented server rooms, we started dividing servers into virtual machines, but apparently it wasn’t good enough. Then the cloud revolution came and it has been a game changer since then. With cloud computing we got self-service through API calls that enable us to create various resources in different parts of the world. What an excellent and convenient solution! Why would anyone want more? It turns out that there are a couple of reasons why one would want to move to another level - multi-cloud. Reasons for going multi-cloud There are many reasons and I chose the most important ones that have the biggest impact during the decision making process. Costs Although most popular cloud services are comparable when it comes to costs, there could be slight differences between cloud providers, especially when considering geographical placement. And besides, on a larger scale a few percent cheaper virtual machines could save thousands of dollars in total, which is something worth considering. When using multiple cloud providers you can also leverage the fact that you are able to move your workload to a competitor and negotiate better terms of your contract. 
Vendor lock-in avoidance Many companies tend to avoid a vendor lock-in situation and even if they have their preferred cloud provider, it is a matter of the multi-cloud policy that assures the portability of your services to the other cloud provider. This also has a positive side effect - by using multiple cloud providers you can compare which of them are better in terms of not only cost, but also when it comes to reliability (SLAs), stability, speed, and other factors your organization finds important. Broad SaaS offering When providing a product in a SaaS model, it is mandatory to have an offering that is available on all major cloud platforms. It is crucial, especially if you offer it by providing direct access from your customers’ environments by linking their networks with a dedicated environment where you put your software. This is a fairly popular model used by companies providing services which are sensitive to latencies such as databases. Higher availability No cloud provider can give you 100% availability and how many “nines” they can offer in their SLAs depends on the service type and sometimes its tier. So to increase availability of your service you may need to leverage not only multiple regions, but also multiple cloud providers. Sometimes you may need to use a particular geographic region and an outage of a crucial service can affect your services which can be avoided by spreading them out into different cloud providers since they place their regions in similar locations. Leverage unique services Not every cloud provider offers the same set of services and this is the reason why you may want to use this particular service even if you’re running most of your applications on a single cloud provider. This will require not only setting up the secondary provider, learn how to manage it, but will also open up new opportunities to test and compare other services since you’ve come such a long way. 
Kubernetes as a multi-cloud service Now I will focus on a small, but very crucial part of cloud services - Kubernetes services with all the services and challenges behind it. It is important especially for organizations which run their software or the software they are provided by external vendors. Nowadays it’s not only software that is eating the world, but also containers have started to taking a huge bite of it. When one is enough There are cases where multi-cloud can bring a lot of benefits, but on the other hand there are dozens where it just makes no or little sense. It is especially valid when using Kubernetes which is designed for distributed systems and can leverage underlying cloud infrastructure to provide the following features: easy scalability of applications running as containers high availability and resilience thanks to the scheduler and its features (i.e. pod affinity and anti-affinity rules, node affinity rules and automatic distribution among different availability zones) geographical distribution using multiple clusters and service mesh connecting them together (additionally there will be a centralized control plane for those distributed clusters - kubefed) If only that solved all of the challenges that are not that easily addressed by Kubernetes itself, but might be solved with other ways. Challenge 1 - storage This is probably the biggest challenge of all - how to provide a consistent storage used by these distributed systems which can work not only in various regions but also in different cloud providers? Here’s the simplest solution - just don’t. If you can skip this part and design your systems in a way that would not require storage synchronization then you’ll save yourself a lot of time and potential problems that will eventually come up. In case you really need to have everything in-sync between all sites and regions think it through once again. 
Then, if you really, I mean really, really want it, then consider choosing a storage that would implement synchronization between multiple sites. One of the most interesting projects for distributed databases is Vitess based on a solution used by YouTube and is compatible with standard SQL databases. It is based on MySQL so if that’s fine for you then you can start experimenting with it to create a multi-region and multi-cloud solution that will span across multiple sites (e.g. multiple Kubernetes clusters). For Kubernetes there is even an operator that makes it quite easy to set up. Cassandra is an alternative to the Vitess which is more mature, but also not MySQL-compatible and requires you to design your app specifically for that type of database. There are a couple of operators as well - this one and this provided by Datastax. It is also worth to mention projects such as Rook or OpenEBS which can provide a low-level solution on which it is possible to build something more universal. Challenge 2 - networking Placing your applications in multiple regions and cloud providers could in fact be quite easy with Kubernetes and the next step is to put some traffic to these environments. With a single site it’s a trivial thing - all you need is some dns records pointing to your load balancers and the rest is handled by Kubernetes. For multi-cloud there are some caveats. First, you should avoid cloud-specific configurations such as Ingress that leverage features of particular cloud providers. This is the point where they provide a really long and compelling list of features configurable in their specific way that makes it a perfect trap for vendor lock-in. Second, and in my opinion the most challenging part, is connectivity between your clusters. It’s a very important decision you need to make on whether to treat your clusters independently or as a whole. 
If you decide to take the former your life would be much easier, and if you prefer the latter you will need to invest much time, but at the same time you’ll get a huge, distributed cluster. For independent, not interconnected clusters you need to provide a method for routing external traffic to your clusters. To make things short and easy - choose AWS Route53, as this is the best method of load balancing traffic to the same application deployed on multiple clusters running in different clouds on different regions. It provides a healtcheck method that will cut off misbehaving clusters automatically. Currently I just don’t see a better way to handle this. When creating a swarm-like configuration where applications communicate with each other you need a way to connect all clusters and fortunately there are a few methods you can use. I would consider either HashiCorp Consul or Istio. They are both service meshes that use Envoy that is configured automatically to proxy traffic between all applications and provision special proxies at the edge of each cluster that interconnect them in a transparent way. If you also need to manage those clusters from a single place you may use kubefed project - they should release the beta version soon. Challenge 3 - differences in Kubernetes services Although Kubernetes is all about abstraction layers and portability, cloud providers are aware of its growing popularity and they want not only to attract more users to their cloud services, but they also want them to stay there for long periods of time. You can see many ways they use to discourage you from migrating or using other cloud providers. That’s why there are few differences between those Kubernetes services. This results from the fact that Kubernetes service gives control to cloud provider over all Kubernetes parameters and there are plenty of them alongside with feature gates that modify the behaviour of the clusters that one can. 
One of the most evident examples is AWS EKS authentication. It’s the Kubernetes cloud service where you cannot use certificates to authenticate and most likely you will end up with some IAM accounts or service accounts’ tokens. Many features greatly enhances performance such as Network Endpoint Group available on Google GKE and it’s just hard to switch to other, often worse solution. The same applies to availability of Istio, different logging and monitoring approach. These are hard choices you have to make if you really want to leverage the portability feature and the final solution to that challenge is to provision your cluster manually using IaaS services. This approach applies to OpenShift/OKD or any other installer that provisions control plane with configurable masters. Conclusion Multi-cloud approach is hard. Even with Kubernetes it can be sometimes easier to accept the fact that keeping this either possibility of using multiple cloud providers or actually implementing it requires a lot more effort than sticking with one. Those who will choose the harder path will leverage almost utopian vision of interconnected swarm of containers running on different IaaS implementations, different regions and maybe even completely different ways they are operated underneath those all abstraction layers that Kubernetes create. Even if multi-cloud concept is something abstract and for some even unnecessary, then hybrid solutions are often a must for many organizations and rules that will allow them to build such solutions are the same. I expect this to be a hot topic for upcoming months and years and I can’t wait to see and help to implement more of those setups. 
After all with Kubernetes it has never been easier.</summary></entry><entry><title type="html">4 ways to manage Kubernetes resources</title><link href="https://blog.cloudowski.com/articles/4-ways-to-manage-kubernetes-resources/" rel="alternate" type="text/html" title="4 ways  to manage Kubernetes resources" /><published>2020-03-15T00:00:00+01:00</published><updated>2020-03-15T00:00:00+01:00</updated><id>https://blog.cloudowski.com/articles/4-ways-to-manage-kubernetes-resources</id><content type="html" xml:base="https://blog.cloudowski.com/articles/4-ways-to-manage-kubernetes-resources/">&lt;h1 id=&quot;kubectl-is-the-new-ssh&quot;&gt;Kubectl is the new ssh&lt;/h1&gt;

&lt;p&gt;When I started my adventure with Linux systems, the first tool I had to get to know was ssh. Oh man, what a wonderful and powerful piece of software it is! You can not only log in to your servers and copy files, but also create &lt;a href=&quot;https://help.ubuntu.com/community/SSH_VPN&quot;&gt;VPNs&lt;/a&gt;, bypass firewalls with a SOCKS proxy and port-forwarding rules, and much more. With Kubernetes, however, this tool is used mostly for node maintenance - provided that you still manage nodes yourself and haven’t switched to CoreOS or another variant of the immutable node type. For everything else, you use &lt;em&gt;kubectl&lt;/em&gt;, which is the new ssh. If you don’t use API calls directly, then you probably use it in some form and feed it with plenty of yaml files. Let’s face it - this is what managing a Kubernetes environment looks like nowadays. You create those beautiful, lengthy text files with the definitions of the resources you wish to be created by Kubernetes, then magic happens and you’re the hero of the day. Unless you want to create not one but tens or hundreds of them with different configurations. And that’s when things get complicated.&lt;/p&gt;

&lt;h1 id=&quot;simplicity-vs-flexibility&quot;&gt;Simplicity vs. flexibility&lt;/h1&gt;

&lt;p&gt;For basic scenarios, simple yaml files can be sufficient. However, as your environment grows, so does the number of resources and configurations. You may start noticing how much more time it takes to create a new instance of your app, reconfigure the ones already running, or share your app with the community or with customers wishing to customize it to their needs.
Currently, I find the following ways to be the most commonly used:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Plain yaml files&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://kustomize.io&quot;&gt;Kustomize&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://helm.sh&quot;&gt;Helm Charts&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Operators&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They can all be used to manage your resources, yet they differ in many ways. One of the distinguishing factors is complexity, which determines how much effort it takes to learn, use, and maintain a particular method. On the other hand, that effort might pay off in the long run when you really need complex configurations. You can observe this relationship in the following diagram:&lt;/p&gt;

&lt;figure class=&quot;align-center&quot;&gt;
  &lt;img src=&quot;/assets/images/k8s-4-tools-to-manage.png&quot; alt=&quot;&quot; /&gt;
  
    &lt;figcaption&gt;
      Flexibility vs. Complexity

    &lt;/figcaption&gt;&lt;/figure&gt;

&lt;p&gt;So there’s a trade-off between how much flexibility you want to have versus how simple it can be. For some simplicity can win and for some, it’s just not enough. Let’s have a closer look at these four ways and see in which cases they can fit best.&lt;/p&gt;

&lt;h2 id=&quot;1-keep-it-simple-with-plain-yamls&quot;&gt;1. Keep it simple with plain yamls&lt;/h2&gt;

&lt;p&gt;I’ve always told people attending my courses that by learning Kubernetes they become yaml programmers. It might sound silly, but in reality the basic usage of Kubernetes comes down to writing definitions of objects in plain yaml. Of course, you have to know two things - first, what you want to create, and second, the Kubernetes API, which is the foundation of these yaml files.&lt;br /&gt;
After you’ve learned how to write yaml files, you can just use &lt;code class=&quot;highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; to send them to Kubernetes and your job is done. No parameters, no templates, no figuring out how to change things in a fancy way. If you want to create an additional instance of your application or of the whole environment, you just copy and paste. Of course, there will be some duplication, but that’s the price you pay for simplicity. And besides, for a couple of instances it’s not a big deal, and most organizations can probably live with this imperfect solution, at least at the beginning of their journey when they are not as big as they wish to be.&lt;/p&gt;
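In practice the whole workflow is one file and one command. A minimal sketch (names are illustrative):

```yaml
# app.yaml - one instance of the application, no templating involved
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels:
    app: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.19
```

Applied with `kubectl apply -f app.yaml`; a second instance is literally a copy of the file with the names changed.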

&lt;p&gt;&lt;strong&gt;When to use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;For projects with fewer than 4 configurations/instances of their apps or environments&lt;/li&gt;
  &lt;li&gt;For small startups&lt;/li&gt;
  &lt;li&gt;For bigger companies starting their first Kubernetes projects (e.g. as a part of PoC)&lt;/li&gt;
  &lt;li&gt;For individuals learning Kubernetes API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to avoid:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;For organizations and projects releasing their products or services for Kubernetes environments&lt;/li&gt;
  &lt;li&gt;For projects where each instance varies significantly and requires a lot of adjustments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;2-customize-a-bit-with-kustomize&quot;&gt;2. Customize a bit with Kustomize&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://kustomize.io&quot;&gt;Kustomize&lt;/a&gt; is a project maintained by one of the official Kubernetes SIGs. It is built around the concept of inheritance-based overlays for Kubernetes resources defined in… yaml files. That’s right - you cannot escape from them! This time, however, with Kustomize you can apply any changes you want to an already existing set of resources. To put it simply, &lt;strong&gt;Kustomize can be treated as a Kubernetes-specific patch tool&lt;/strong&gt;. It lets you override any part of your yaml files and adds extra features, including the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Changing repositories, names, and tags for container images&lt;/li&gt;
  &lt;li&gt;Generating ConfigMap objects directly from files, with content hashes ensuring that a Deployment will trigger a new rollout when they change&lt;/li&gt;
  &lt;li&gt;Using the kustomize CLI to modify configurations on the fly (useful in CI/CD pipelines)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since version 1.14, Kustomize has been built into the kubectl binary, which makes it easy to start with. Unfortunately, new features land much faster in the standalone kustomize project, and its release cycle doesn’t sync up with the official releases of kubectl binaries. Thus, I highly recommend using the standalone version rather than kubectl’s built-in functionality.&lt;br /&gt;
According to its creators, Kustomize encourages you to use the Kubernetes API directly without creating another artificial abstraction layer.&lt;/p&gt;
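
&lt;p&gt;To give you an idea, here is a sketch of an overlay’s &lt;code class=&quot;highlighter-rouge&quot;&gt;kustomization.yaml&lt;/code&gt; combining the features above (the paths and names are illustrative):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# overlays/prod/kustomization.yaml
resources:
- ../../base              # plain yamls with the Deployment, Service, etc.
namePrefix: prod-
images:
- name: myapp             # patch the image tag without touching the base
  newTag: &quot;1.2.0&quot;
configMapGenerator:
- name: myapp-config      # gets a content-hash suffix, so changing
  files:                  # app.properties triggers a new rollout
  - app.properties
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then &lt;code class=&quot;highlighter-rouge&quot;&gt;kustomize build overlays/prod | kubectl apply -f -&lt;/code&gt; (or &lt;code class=&quot;highlighter-rouge&quot;&gt;kubectl apply -k overlays/prod&lt;/code&gt; with the built-in version) renders and applies the patched manifests.&lt;/p&gt;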

&lt;p&gt;&lt;strong&gt;When to use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;For projects with less than 10 configurations/instances that don’t require too many parameters&lt;/li&gt;
  &lt;li&gt;For startups starting to grow, but still using Kubernetes internally (i.e. without the need to publish manifests as a part of their products)&lt;/li&gt;
  &lt;li&gt;For anyone who knows Kubernetes API and feels comfortable with using it directly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to avoid:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If your environments or instances differ from each other by 30-50% or more, because you’ll end up rewriting most of your manifests as patches&lt;/li&gt;
  &lt;li&gt;In the same cases as with plain yamls&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;3-powerful-helm-charts-for-advanced&quot;&gt;3. Powerful Helm Charts for advanced users&lt;/h2&gt;

&lt;p&gt;If you haven’t seen &lt;a href=&quot;https://hub.helm.sh/&quot;&gt;Helm Hub&lt;/a&gt; yet, I recommend you look there for your favorite software, especially if it’s a popular open-source project - I’m pretty sure it’s there. With the release of Helm 3 most of its flaws have been fixed. The biggest one - the Tiller component - is no longer required, which makes Helm a really great tool for your deployments. For OpenShift users it could also be a great relief, since OpenShift’s templating system is just too simple (I’m trying to avoid the word &lt;em&gt;terrible&lt;/em&gt;, but it is).&lt;br /&gt;
Many people who have started using Helm to deploy these ready-made services then start writing their own Charts for applications and almost everything they deploy on Kubernetes. It might be a good idea for really complex configurations, but in most cases it’s just overkill. When you don’t publish your Charts to some registry (and soon even to &lt;a href=&quot;https://helm.sh/docs/topics/registries/&quot;&gt;container registries&lt;/a&gt;) and just use them for their templating feature (with Helm 3 this is finally possible without downloading the Chart’s source code), you might be better off with Kustomize.&lt;br /&gt;
For advanced scenarios, however, Helm is the way to go. It can be the single tool that you use to release your applications for other teams to deploy to their environments. So can your customers, who can use a single command - literally just &lt;code class=&quot;highlighter-rouge&quot;&gt;helm upgrade YOURCHART&lt;/code&gt; - to deploy a newer version of your app. All you need to do in order to achieve this simplicity is “&lt;em&gt;just&lt;/em&gt;”:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;write Chart templates in a way that would handle all these cases and configuration variants&lt;/li&gt;
  &lt;li&gt;create and maintain the whole release process with CI/CD pipeline, testing, and publishing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many examples on Helm Hub show how complex software can be packed into a Chart to make installation a trivial process and customization much more accessible, especially for end-users who don’t want to get into much detail. I myself use many Helm Charts to install software and consider Helm one of the most important projects in the Kubernetes ecosystem.&lt;/p&gt;
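
&lt;p&gt;To illustrate the templating side, here is a hypothetical fragment of a Chart - a value from &lt;code class=&quot;highlighter-rouge&quot;&gt;values.yaml&lt;/code&gt; flowing into a template:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# values.yaml - the knobs your users can override
replicaCount: 3
image:
  repository: myapp
  tag: &quot;1.2.0&quot;

# templates/deployment.yaml (excerpt) - rendered by Helm
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag }}&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A consumer then needs nothing more than &lt;code class=&quot;highlighter-rouge&quot;&gt;helm upgrade --install myapp ./chart --set replicaCount=5&lt;/code&gt; to deploy or reconfigure it.&lt;/p&gt;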

&lt;p&gt;&lt;strong&gt;When to use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;For big projects with more than 10 configurations/instances that have many variants and parameters&lt;/li&gt;
  &lt;li&gt;For projects that are published on the Internet to make them easy to install&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to avoid:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If your applications are not that complex and you don’t need to publish them anywhere&lt;/li&gt;
  &lt;li&gt;If you don’t plan to maintain CI/CD for the release process, because maintaining Charts without pipelines is just time-consuming&lt;/li&gt;
  &lt;li&gt;If you don’t know Kubernetes API in-depth yet&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;4-automated-bots-operators-at-your-service&quot;&gt;4. Automated bots (operators) at your service&lt;/h2&gt;

&lt;p&gt;Now for the final one - the most sophisticated and, for some, superfluous. It’s a design pattern proposed by CoreOS (now Red Hat) that leverages Kubernetes features like Custom Resource Definitions together with custom logic embedded in software running directly on Kubernetes - components called controllers that use its internal API. It is widely used in the OpenShift ecosystem and has been promoted by Red Hat since the release of OpenShift 4 as the best way to create services on OpenShift. They even provide an operator for customizing OpenShift’s web interface. That’s what I call an abstraction layer! Everything there is controlled with yaml handled by dozens of custom operators, because the whole logic is embedded in them.&lt;br /&gt;
To put it simply, &lt;strong&gt;an operator is the equivalent of a cloud service&lt;/strong&gt; like Amazon RDS, GCP Cloud Pub/Sub or Azure Cosmos DB. You build an operator to provide a consistent, simple way to install and maintain (including upgrades) your application in an &lt;em&gt;“-as-a-Service”&lt;/em&gt; way on any Kubernetes platform using its native API. It not only provides the highest level of automation, but also allows for complex logic such as built-in monitoring, seamless upgrades, self-healing and autoscaling. Once again - all you need to do is provide a definition in yaml format and the rest will be taken care of by the operator.&lt;br /&gt;
“It looks awesome!” one can say. Many think it should and will be the preferred way of delivering applications. I cannot agree with that statement. If you’re a software vendor providing your application to hundreds of customers (even internally), then this is the way to go. Otherwise, it can be too complex and time-consuming to write operators - especially if you want to follow best practices, use Golang and provide an easy upgrade path (and it can get tricky).&lt;/p&gt;
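
&lt;p&gt;From the consumer’s perspective, using an operator boils down to a single custom resource - the &lt;code class=&quot;highlighter-rouge&quot;&gt;PostgresCluster&lt;/code&gt; kind below is hypothetical, but this is the general shape:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: example.com/v1
kind: PostgresCluster      # a Custom Resource Definition provided by the operator
metadata:
  name: orders-db
spec:
  replicas: 3
  version: &quot;13&quot;
  backup:
    schedule: &quot;0 3 * * *&quot;  # the controller reconciles all of this for you
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The controller watches such objects and creates, upgrades and heals all the underlying StatefulSets, Services and Secrets on its own.&lt;/p&gt;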

&lt;p&gt;I found the following projects to be very helpful in developing and maintaining Operators:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/kubernetes-sigs/kubebuilder&quot;&gt;kubebuilder&lt;/a&gt; - one of the first operator frameworks for Go developers, the most powerful and the most complex one&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/zalando-incubator/kopf&quot;&gt;kopf&lt;/a&gt; - a framework for developing operators in Python&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://kudo.dev/&quot;&gt;KUDO&lt;/a&gt; - write operators in a declarative way&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/operator-framework/operator-sdk&quot;&gt;operator-sdk&lt;/a&gt; - a framework from &lt;del&gt;CoreOS&lt;/del&gt; Red Hat for writing operators in Go and Ansible&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/operator-framework/operator-lifecycle-manager&quot;&gt;operator-lifecycle&lt;/a&gt; - a must-have for anyone interested in getting serious with operators and their lifecycle (installation, maintenance, upgrades)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you need to create your own service (e.g. &lt;em&gt;YourProduct-as-a-Service&lt;/em&gt;) available on Kubernetes&lt;/li&gt;
  &lt;li&gt;If you plan to add additional features to your service (e.g. monitoring, autoscaling, autohealing, analytics)&lt;/li&gt;
  &lt;li&gt;If you’re a software vendor providing your software for Kubernetes platforms&lt;/li&gt;
  &lt;li&gt;If you want to develop software installed on OpenShift and be a part of its ecosystem (e.g. publish your software on their &lt;em&gt;“app marketplace”&lt;/em&gt; - &lt;a href=&quot;https://operatorhub.io/&quot;&gt;operatorhub.io&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to avoid:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;For simple applications&lt;/li&gt;
  &lt;li&gt;For other applications when Helm Chart with some semi-complex templates will do&lt;/li&gt;
  &lt;li&gt;When no extra automation is needed or it can be accomplished with simple configuration of the existing components&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;Each of the methods and tools I have described suits organizations at a different point of their journey with Kubernetes. For standard use-cases simple yamls may be sufficient, and with more applications Kustomize can be a great enhancement of this approach. When things get serious and applications get more complex, Helm Charts present a good balance between complexity and flexibility. I can recommend Operators for vendors delivering their applications on Kubernetes in a similar way to cloud services, and definitely for those who plan to provide them for enterprise customers using OpenShift.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><category term="openshift" /><category term="containers" /><category term="operator" /><category term="kustomize" /><summary type="html">Kubectl is the new ssh When I started my adventure with linux systems the first tool I had to get to know was ssh. Oh man, what a wonderful and powerful piece of software it is! You can not only log in to your servers, copy files, but also create vpns, omit firewalls with SOCKS proxy and port-forwarding rules, and many more. With Kubernetes, however, this tool is used mostly for node maintenance provided that you still need to manage them and you haven’t switched to CoreOS or another variant of the immutable node type. For any other cases, you use kubectl which is the new ssh. If you don’t use API calls directly then you probably use it in some form and you feed it with plenty of yaml files. Let’s face it - this is how managing Kubernetes environment looks like nowadays. You create those beautiful, lengthy text files with the definitions of the resources you wish to be created by Kubernetes and then magic happens and you’re the hero of the day. Unless you want to create not one but tens or hundreds of them with different configurations. And that’s when things get complicated. Simplicity vs. 
flexibility For basic scenarios, simple yaml files can be sufficient. However, with the growth of your environment, the number of resources and configurations grows. You may start noticing how much more time it takes to create a new instance of your app, reconfigure the ones that are running already or share it with the community or with your customers wishing to customize it to their needs. Currently, I find the following ways to be the most commonly used: Plain yaml files Kustomize Helm Charts Operators They all can be used to manage your resources and they also are different in many ways. One of the distinguishing factors is complexity which also implies much effort to learn, use and maintain a particular method. On the other hand, it might pay off in the long run when you really want to create complex configurations. You can observe this relationship in the following diagram: Flexibility vs. Complexity So there’s a trade-off between how much flexibility you want to have versus how simple it can be. For some simplicity can win and for some, it’s just not enough. Let’s have a closer look at these four ways and see in which cases they can fit best. 1. Keep it simple with plain yamls I’ve always told people attending my courses that by learning Kubernetes they become yaml programmers. It might sound silly, but in reality, the basic usage of Kubernetes comes down to writing definitions of some objects in plain yaml. Of course, you have to know two things - the first is what you want to create, and the second is the knowledge on Kubernetes API which is the foundations of these yaml files. After you’ve learned how to write yaml files you can just use kubectl to send it to Kubernetes and your job is done. No parameters, no templates, not figuring out how to change it in a fancy way. If you want to create an additional instance of your application or the whole environment you just copy and paste. 
Of course, there will be some duplication here but it’s the price you pay for simplicity. And besides, for a couple of instances it’s not a big deal and most of the organizations probably can live with this imperfect solution, at least at the beginning of their journey when they are not as big as they wish to be. When to use: For projects with less than 4 configurations/instances of their apps or environments For small startups For bigger companies starting their first Kubernetes projects (e.g. as a part of PoC) For individuals learning Kubernetes API When to avoid: organizations and projects releasing their products or services for Kubernetes environments in projects where each instance varies significantly and requires a lot of adjustments 2. Customize a bit with Kustomize Kustomize is a project that is one of Kubernetes official SIG groups. It has the concept of inheritance based Kubernetes resources defined in.. yaml files. That’s right - you cannot escape from them! This time, however, with Kustomize you can apply any changes you want to your already existing set of resources. To put it simply Kustomize can be treated as a Kubernetes-specific patch tool. It lets you override all the parts of yaml files with additional features, including the following: Changing repositories, names, and tags for container images Generating ConfigMap objects directly from files and generate hashes ensuring that Deployment will trigger a new rollout when they change Using kustomize cli to modify configurations on the fly (useful in CI/CD pipelines) From version 1.14 it is built-in to kubectl binary which makes it easy to start with. Unfortunately, new features are added much faster in standalone kustomize project and its release cycle doesn’t sync up with the official releases of kubectl binaries. Thus, I highly recommend using its standalone version rather than kubectl’s built-in functionality. 
According to its creators, it encourages you to use Kubernetes API directly without creating another artificial abstraction layer. When to use: For projects with less than 10 configurations/instances that don’t require too many parameters For startups starting to grow, but still using Kubernetes internally (i.e. without the need to publish manifests as a part of their products) For anyone who knows Kubernetes API and feels comfortable with using it directly When to avoid: If your environments or instances vary up to between 30-50%, because you’ll just rewrite most of your manifests by adding patches In the same cases as with plain yamls 3. Powerful Helm Charts for advanced If you haven’t seen Helm Hub then I recommend you to do it and look for your favorite software, especially if it’s a popular open-source project, and I’m pretty sure it’s there. With the release of Helm 3 most of its flaws have been fixed. Actually the biggest one was the Tiller component that is no longer required which makes it really great tool for your deployments. For OpenShift users that could also be a great relief since its templating system is just too simple (I’m trying to avoid word terrible but it is). Most people who have started using Helm for deploying these ready services often start writing their own Charts for applications and almost everything they deploy on Kubernetes. It might be a good idea for really complex configurations but in most cases, it’s just overkill. In cases when you don’t publish your Charts to some registry (and soon even to container registries) and just use them for their templating feature (with Helm 3 it is finally possible without downloading Chart’s source code), you might be better of with Kustomize. For advanced scenarios, however, Helm is the way to go. It can be this single tool that you use to release your applications for other teams to deploy to their environments. 
And so can your customers who can use a single command - literally just helm upgrade YOURCHART - to deploy a newer version of your app. All you need to do in order to achieve this simplicity is “just”: write Chart templates in a way that would handle all these cases and configuration variants create and maintain the whole release process with CI/CD pipeline, testing, and publishing Many examples on Helm Hub shows how complex software can be packed in a Chart to make installation a trivial process and customization much more accessible, especially for end-users who don’t want to get into much details. I myself use many Helm Charts to install software and consider it as one of the most important projects in Kubernetes ecosystem. When to use: For big projects with more than 10 configurations/instances that have many variants and parameters For projects that are published on the Internet to make them easy to install When to avoid: If your applications are not that complex and you don’t need to publish them anywhere If you don’t plan to maintain CI/CD for the release process cause maintaining Charts without pipelines is just time-consuming If you don’t know Kubernetes API in-depth yet 4. Automated bots (operators) at your service Now, the final one, most sophisticated, and for some superfluous. In fact, it’s a design pattern proposed by CoreOS (now Red Hat) that just leverages Kubernetes features like Custom Resource Definition and custom logic embedded in software running directly on Kubernetes and leveraging its internal API called controllers. It is widely used in the OpenShift ecosystem and it’s been promoted by Red Hat since the release of OpenShift 4, as the best way to create services on OpenShift. They even provide an operator for customizing OpenShift’s web interface. That’s what I call an abstraction layer! Everything is controlled there with yaml handled by dozens of custom operators, because the whole logic is embedded there. 
To put it simply what is operator I would say that operator is an equivalent of cloud service like Amazon RDS, GCP Cloud Pub/Sub or Azure Cosmos DB. You build an operator to provide a consistent, simple way to install and maintain (including upgrades) your application in ”-as-a-Service” way on any Kubernetes platform using its native API. It does not only provide the highest level of automation, but also allows for including complex logic such as built-in monitoring, seamless upgrades, self-healing and autoscaling. Once again - all you need to do is provide a definition in yaml format and the rest will be taken care of by the operator. “It looks awesome!” one can say. Many think it should and will be a preferred way of delivering applications. I cannot agree with that statement. I think if you’re a software vendor providing your application to hundreds of customers (even internally) then this is the way to go. Otherwise, it can be too complex and time consuming to write operators. Especially if you want to follow best practices, use Golang and provide an easy upgrade path (and it can get tricky). I found the following projects to be very helpful in developing and maintaining Operators: kubebuilder - one of the first operator frameworks for Go developers, the most poweful and the most complex one kopf - framework for developing operators in python KUDO - write operators in a declarative way operator-sdk - framework from CoreOSRed Hat for writing operators in Go and Ansible operator-lifecycle - a must have for anyone interested in getting serious with operators and their lifrecycle (installation, maintenance, upgrades) When to use: If you need to create your own service (e.g. YourProduct-as-a-Service) available on Kubernetes If you plan to add additional features to your service (e.g. 
monitoring, autoscaling, autohealing, analytics) If you’re a software vendor providing your software for Kubernetes platforms If you want to develop software installed on OpenShift and be a part of its ecosystem (e.g. publish your software on their ”app marketplace” - operatorhub.io) When to avoid: For simple applications For other applications when Helm Chart with some semi-complex templates will do When no extra automation is needed or it can be acomplished with simple configuration of the existing components Conclusion Each of these methods and tools I have described are for organizations at different point of their journey with Kubernetes. For standard use-cases simple yamls may be sufficient and with more applications Kustomize can be great enhancement of this approach. When things get serious and applications get more complex, Helm Chart presents a perfect balance between complexity and flexibility. I can recommend Operators for vendors delivering their applications in Kubernetes in a similar way to cloud services, and definitely for those who plan to provide it for enterprise customers using OpenShift.</summary></entry><entry><title type="html">Why Vault and Kubernetes is the perfect couple</title><link href="https://blog.cloudowski.com/articles/why-vault-and-kubernetes-is-the-perfect-couple/" rel="alternate" type="text/html" title="Why Vault and Kubernetes is the perfect couple" /><published>2020-02-22T00:00:00+01:00</published><updated>2020-02-22T00:00:00+01:00</updated><id>https://blog.cloudowski.com/articles/why-vault-and-kubernetes-is-the-perfect-couple</id><content type="html" xml:base="https://blog.cloudowski.com/articles/why-vault-and-kubernetes-is-the-perfect-couple/">&lt;h2 id=&quot;the-not-so-secret-flaws-of-kubernetes-secrets&quot;&gt;The (not so) secret flaws of Kubernetes Secrets&lt;/h2&gt;

&lt;p&gt;When you start learning and using Kubernetes, you discover that there is a special object called &lt;em&gt;Secret&lt;/em&gt; that is designed for storing various kinds of confidential data. However, when you find out it is very similar to the &lt;em&gt;ConfigMap&lt;/em&gt; object and is &lt;strong&gt;not encrypted&lt;/strong&gt; (it can be optionally encrypted at rest), you may start wondering - is it really secure? Especially when you use the same API and the same credentials to interact with it. This, combined with a rather simple RBAC model, can create many potential risks. Most people stick with one of the three default roles for regular users - &lt;em&gt;view, edit&lt;/em&gt;, and &lt;em&gt;admin&lt;/em&gt; - with &lt;em&gt;view&lt;/em&gt; being the only one that forbids viewing Secret objects. You need to be very careful when assigning roles to users or deciding to create custom RBAC roles. But again, this is not that easy either, since RBAC rules can only whitelist API requests - it is not possible to create exceptions (i.e. blacklists) without an external mechanism such as Open Policy Agent.&lt;/p&gt;
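
&lt;p&gt;To see why, look at a typical Secret - the &lt;code class=&quot;highlighter-rouge&quot;&gt;data&lt;/code&gt; field is merely base64-encoded, so anyone allowed to read the object can decode it:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: czNjcjN0   # base64 of &quot;s3cr3t&quot; - encoding, not encryption
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A simple &lt;code class=&quot;highlighter-rouge&quot;&gt;kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d&lt;/code&gt; reveals the plaintext.&lt;/p&gt;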

&lt;h2 id=&quot;managing-secrets-is-hard&quot;&gt;Managing Secrets is Hard&lt;/h2&gt;

&lt;p&gt;On top of that, managing Secret object definitions (e.g. yaml files) is not an easy task. Where should you store them before sending them to your Kubernetes cluster - in a git repo? Outside of it? Who should have access to view and modify them? What about encryption - should they be encrypted with a single key shared by the trusted team members or with gpg (e.g. &lt;a href=&quot;https://git-secret.io/&quot;&gt;git-secret&lt;/a&gt;, &lt;a href=&quot;https://github.com/AGWA/git-crypt&quot;&gt;git-crypt&lt;/a&gt;)? 
One thing is for sure - it is hard to maintain Secret object definitions in the same way as other Kubernetes objects. You can try to come up with your own way of protecting them, auditing changes and other important things you’re not even aware of, but why reinvent the wheel when there’s something better? Much, MUCH better.&lt;/p&gt;

&lt;h2 id=&quot;hashicorp-vault-to-the-rescue&quot;&gt;HashiCorp Vault to the Rescue&lt;/h2&gt;

&lt;p&gt;Now some may say I am a HashiCorp fanboy, which… might be partially true :) I love not only their products but even more their approach towards managing infrastructure, and the fact that most features they provide are available in the open source versions. 
It is no surprise that the best product they have on offer (in terms of commercial success) is Vault - a project designed to help you store and securely access your confidential data. It is designed for this purpose only and has many excellent features, among which you will also find many specific to Kubernetes environments.&lt;/p&gt;

&lt;h2 id=&quot;best-features-of-vault&quot;&gt;Best features of Vault&lt;/h2&gt;

&lt;p&gt;I’m not going to list out all of the features - they are available in &lt;a href=&quot;https://www.vaultproject.io/docs/what-is-vault/&quot;&gt;the official documentation&lt;/a&gt;. Let me focus on the most important ones and the ones also related to Kubernetes.&lt;/p&gt;

&lt;h3 id=&quot;one-security-dedicated-service-enterprise-features&quot;&gt;One security-dedicated service with enterprise features&lt;/h3&gt;

&lt;p&gt;The fact that it’s a central place where you store all your confidential data may be alarming at first, but Vault offers many interesting functionalities that should remove any doubts about its security capabilities. One of them is the concept of &lt;a href=&quot;https://www.vaultproject.io/docs/concepts/seal/&quot;&gt;unsealing&lt;/a&gt; Vault after a start or restart. It is based on &lt;a href=&quot;https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing&quot;&gt;Shamir’s Secret Sharing&lt;/a&gt;, which requires multiple keys that should be owned and protected by different people. This definitely decreases the chance of tampering with stored data, as the whole process imposes transparency on such actions. 
Of course there’s also &lt;a href=&quot;https://www.vaultproject.io/docs/audit/&quot;&gt;audit&lt;/a&gt;, &lt;a href=&quot;https://www.vaultproject.io/docs/concepts/ha/&quot;&gt;high availability&lt;/a&gt; and access defined with well-documented &lt;a href=&quot;https://www.vaultproject.io/docs/concepts/policies/&quot;&gt;policies&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;ability-to-store-various-type-of-data&quot;&gt;Ability to store various types of data&lt;/h3&gt;

&lt;p&gt;The first thing that people want to store in places like Vault is passwords. This is probably because we use them most often. However, if you want to deploy Vault only for this purpose you should reconsider, because it’s much more powerful - it’s like driving a Ferrari using 1st gear only. Vault has many secret engines designed for different kinds of data. The basic one - &lt;a href=&quot;https://www.vaultproject.io/docs/secrets/kv/&quot;&gt;KV (Key-Value)&lt;/a&gt; - can be used to store any arbitrary data with advanced versioning. Vault can also act as your &lt;a href=&quot;https://www.vaultproject.io/docs/secrets/pki/&quot;&gt;PKI&lt;/a&gt; or generate &lt;a href=&quot;https://www.vaultproject.io/docs/secrets/totp/&quot;&gt;Time-based One Time Passwords&lt;/a&gt; (similar to Google Authenticator). But that’s not all. In my opinion, the real power of Vault lies in dynamic secrets.&lt;/p&gt;

&lt;h3 id=&quot;forget-your-passwords-with-dynamic-secrets&quot;&gt;Forget your passwords with dynamic secrets&lt;/h3&gt;

&lt;p&gt;It’s my personal opinion, and I think many people will agree with me, that dynamic secrets are &lt;strong&gt;the best feature&lt;/strong&gt; of Vault. If there was a single reason for me to invest my time and resources in implementing Vault in my organization, that would be it. Dynamic secrets change the way you handle authentication. Instead of configuring static passwords, you let Vault create logins and passwords on the fly, on-demand, and with limited usage time. I love the fact that Vault rotates not only users’ passwords but administrators’ as well, because let’s be honest - how often do you change the password to your database, and when was the last time you did it? 
Vault can manage access to your services instead of only storing static credentials, and this is a game-changer. It can manage access to &lt;a href=&quot;https://www.vaultproject.io/docs/secrets/databases/&quot;&gt;databases&lt;/a&gt;, cloud providers (&lt;a href=&quot;https://www.vaultproject.io/docs/secrets/aws/&quot;&gt;AWS&lt;/a&gt;, &lt;a href=&quot;https://www.vaultproject.io/docs/secrets/azure/&quot;&gt;Azure&lt;/a&gt;, &lt;a href=&quot;https://www.vaultproject.io/docs/secrets/gcp/&quot;&gt;Google&lt;/a&gt;), and many others.&lt;/p&gt;
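
&lt;p&gt;A sketch of what this looks like for PostgreSQL with the database secrets engine (connection details, role and database names are illustrative):&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# enable the engine and tell Vault how to reach the database
vault secrets enable database
vault write database/config/orders-db \
    plugin_name=postgresql-database-plugin \
    allowed_roles=&quot;app&quot; \
    connection_url=&quot;postgresql://{{username}}:{{password}}@db.example.com:5432/orders&quot; \
    username=&quot;vault-admin&quot; password=&quot;initial-root-password&quot;

# define how short-lived accounts are created
vault write database/roles/app \
    db_name=orders-db \
    creation_statements=&quot;CREATE ROLE \&quot;{{name}}\&quot; WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';&quot; \
    default_ttl=1h max_ttl=24h

# every read returns a fresh login that expires automatically
vault read database/creds/app
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Vault creates the account on demand and revokes it when the lease expires - no static password ever lands in your configuration.&lt;/p&gt;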

&lt;h3 id=&quot;no-vendor-lock-in&quot;&gt;No vendor lock-in&lt;/h3&gt;

&lt;p&gt;There are various cloud services available that provide similar, though more limited, functionality. Google recently announced their &lt;a href=&quot;https://cloud.google.com/blog/products/identity-security/introducing-google-clouds-secret-manager&quot;&gt;Secret Manager&lt;/a&gt;, AWS has &lt;a href=&quot;https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html&quot;&gt;Parameter Store&lt;/a&gt;, and Azure offers &lt;a href=&quot;https://azure.microsoft.com/en-us/services/key-vault/&quot;&gt;Key Vault&lt;/a&gt;. If you’re looking for a way to avoid vendor lock-in and keep your infrastructure portable, multi-cloud enabled and feature-rich, then Vault will satisfy your needs. Let’s not forget about one more important thing - not every organization uses the cloud, and since Vault can be installed anywhere, it suits these environments perfectly as well.&lt;/p&gt;

&lt;h3 id=&quot;multiple-authentication-engines-with-excellent-kubernetes-support&quot;&gt;Multiple authentication engines with excellent Kubernetes support&lt;/h3&gt;

&lt;p&gt;In order to get access to credentials stored in Vault you need to authenticate yourself, and you have plenty of authentication methods to choose from. You can use a simple username and password or TLS certificates, but you can also use your existing accounts from GitHub, LDAP, OIDC, most cloud providers and many others. These authentication engines can be used by people in your organization and also by your applications. However, when designing access for your systems, you may find other engines more suitable. &lt;a href=&quot;https://www.vaultproject.io/docs/auth/approle/&quot;&gt;AppRole&lt;/a&gt; is dedicated to those scenarios - a more generic method for any application, regardless of the platform it runs on. When you deploy your applications on Kubernetes, you will be better off with the native &lt;a href=&quot;https://www.vaultproject.io/docs/auth/kubernetes/&quot;&gt;Kubernetes support&lt;/a&gt;. It can be used directly by your application, your custom sidecar or the &lt;a href=&quot;https://www.vaultproject.io/docs/agent/&quot;&gt;Vault Agent&lt;/a&gt;.&lt;/p&gt;
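
&lt;p&gt;Configuring the Kubernetes method is a one-time job on the Vault side - roughly like this (the role, policy and API server address are illustrative):&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# enable the method and point it at the cluster's API server
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host=&quot;https://kubernetes.example.com:6443&quot;

# map a ServiceAccount to a Vault policy
vault write auth/kubernetes/role/myapp \
    bound_service_account_names=myapp \
    bound_service_account_namespaces=default \
    policies=myapp-read ttl=1h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A Pod running under the &lt;code class=&quot;highlighter-rouge&quot;&gt;myapp&lt;/code&gt; ServiceAccount can then exchange its token for a Vault token with the permissions of the &lt;code class=&quot;highlighter-rouge&quot;&gt;myapp-read&lt;/code&gt; policy.&lt;/p&gt;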

&lt;h3 id=&quot;native-kubernetes-installation&quot;&gt;Native Kubernetes installation&lt;/h3&gt;

&lt;p&gt;Since Vault is a dedicated security solution, deploying it properly can be somewhat cumbersome. Fortunately, there is a dedicated installation method for &lt;a href=&quot;https://www.vaultproject.io/docs/platform/k8s/&quot;&gt;Kubernetes&lt;/a&gt; that uses a Helm chart provided and maintained by Vault’s authors (i.e. HashiCorp).
Although I really like and appreciate that option, I would use it only for non-production environments, to speed up the learning process. For production deployments, I would still use traditional virtual machines and automate them with Terraform modules - these are also provided by HashiCorp in the &lt;a href=&quot;https://registry.terraform.io/&quot;&gt;Terraform registry&lt;/a&gt; (e.g. for &lt;a href=&quot;https://registry.terraform.io/modules/hashicorp/vault/google/0.2.0&quot;&gt;GCP&lt;/a&gt;).&lt;/p&gt;
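
&lt;p&gt;For reference, a minimal installation with the official chart looks roughly like this (a dev-mode server is shown - fine for learning, never for production):&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Add the official HashiCorp chart repository
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install Vault in dev mode (in-memory storage, auto-unsealed)
helm install vault hashicorp/vault --set &quot;server.dev.enabled=true&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;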

&lt;h2 id=&quot;why-it-is-now-easier-than-ever&quot;&gt;Why it is now easier than ever&lt;/h2&gt;

&lt;p&gt;Until recently, using Vault with Kubernetes required additional, often complicated steps to provide secrets stored in Vault to an application running on Kubernetes. Even with the Vault Agent, you only simplify the token-fetching part and are left with the rest of the logic, i.e. retrieving credentials and making sure they stay up to date. With an additional component - the &lt;a href=&quot;https://www.vaultproject.io/docs/platform/k8s/injector/&quot;&gt;Agent Sidecar Injector&lt;/a&gt; - the whole workflow is now very simple. After installing and configuring it (you do this once), any application can be provided with secrets from Vault in a totally transparent way. All you need to do is add a few annotations to your Pod definitions, such as these:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;vault.hashicorp.com/agent-inject&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;true&quot;&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;vault.hashicorp.com/agent-inject-secret-helloworld&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;secrets/helloworld&quot;&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;vault.hashicorp.com/role&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;myapp&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;No more writing custom scripts and placing them in separate sidecar or init containers - everything is managed by Vault components designed to take these tasks off your hands. It has really never been this easy! In fact, this, combined with the dynamic secrets described earlier, creates a fully passwordless solution. Access is managed by Vault, and your (or your security team’s) job is to define which application should have access to which services. That’s what I call a seamless and secure integration!&lt;/p&gt;
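
&lt;p&gt;If you want to verify the result, the injected agent renders each secret as a file under &lt;code&gt;/vault/secrets/&lt;/code&gt; inside the Pod. With the annotations shown above, something like the following should print the rendered secret (the &lt;code&gt;deploy/myapp&lt;/code&gt; name is hypothetical):&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# The injector writes secrets to a volume shared with the app container
kubectl exec deploy/myapp -- cat /vault/secrets/helloworld
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;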

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I’ve always been a fan of HashiCorp products, and at the same time I’ve considered Kubernetes Secrets an imperfect solution for storing credentials securely. With Vault’s excellent support for Kubernetes, we finally have the missing link in the form of a dedicated service with proper auditing, modularity, and ease of use. If you are serious about securing your Kubernetes workloads, especially in an enterprise environment, then HashiCorp Vault is the best solution there is. Look no further and start implementing - you’ll thank me later.&lt;/p&gt;</content><author><name>Tomasz Cholewa</name><email>tomasz@cloudowski.com</email></author><category term="kubernetes" /><category term="openshift" /><category term="vault" /><category term="security" /><summary type="html">When you start learning and using Kubernetes, you discover that there is a special object called Secret designed for storing confidential data. However, once you find out it is very similar to a ConfigMap and is not encrypted by default, you may start wondering - is it really secure? This post explains why managing Kubernetes Secrets is hard, and how HashiCorp Vault - with its dynamic secrets, multiple authentication engines, and native Kubernetes integration - fills that gap.</summary></entry></feed>