deploy.sh — portfolio
Available for new challenges

Adam Kierat

I build the infrastructure that never makes the headlines.
Tech Lead / Cloud Architecture / Platform Engineering

1 Platform Built from Zero
5 Years in Infrastructure
Full Stack Ownership

Building infrastructure
from the ground up

Adam Kierat

I'm an Infrastructure Tech Lead based in Gliwice, Poland, with a passion for turning complex infrastructure challenges into elegant, automated solutions.

At Beesafe, I architected the entire cloud platform from scratch and now lead all technical decisions across infrastructure, security, and deployment strategy. I don't just configure tools; I design systems that scale, self-heal, and let development teams ship faster.

My journey started with Linux systems administration, progressed through enterprise middleware at ING, and evolved into full-stack DevOps. I think in terms of terraform plan, dream in YAML, and measure success in uptime.

cat ~/.config/adam.yaml
name: Adam Kierat
role: Infrastructure Tech Lead
location: Gliwice, Poland
education:
  degree: BSc Computer Science
  university: Silesian University of Technology
languages:
  - Polish # native
  - English # C1
interests:
  - Cloud Architecture
  - Platform Engineering
  - Infrastructure Automation
status: operational # 99.9% uptime

Numbers That Matter
Quantified results from production

30+ Microservices in Production · Platform built from zero, now serving all company workloads
~30% Cloud Cost Reduction · Azure spend optimized through spot instances & right-sizing
15+/day Automated Deployments · From weekly manual releases to continuous delivery
99.9% Platform Uptime · High availability across all production environments
<30min Mean Time to Recovery · Incident response with automated runbooks & monitoring
0 Security Incidents · Zero-trust architecture with WAF, Vault, and network policies

Credentials

Kubernetes
Terraform
Azure
Linux
GitOps

git log
--oneline --graph

a3f8c2d Sep 2022 — Present

Infrastructure Tech Lead @ Beesafe

📍 Warszawa HEAD

Promoted from DevOps Engineer to Tech Lead. Architected the entire cloud platform from zero and now lead all technical decisions across infrastructure, security, and deployment strategy for the platform serving 30+ production microservices.

Changes:
  • + Architected the entire cloud platform from zero — AKS clusters, GitOps pipelines, observability stack, secrets management serving 30+ microservices
  • + Led technical decision-making across infrastructure, security, and deployment strategy for all development teams
  • + Reduced Azure cloud spend by ~30% through spot instance strategies, reserved capacity, and resource right-sizing
  • + Designed zero-trust security architecture with Cloudflare WAF, network segmentation, and Vault — zero security incidents since implementation
  • + Increased deployment frequency from weekly manual releases to 15+ automated deploys per day via ArgoCD and Terraform
Azure · Kubernetes · Helm · ArgoCD · Terraform · Prometheus · Grafana · Docker · GitHub Actions
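The spot-instance side of that cost reduction can be sketched in Terraform. A minimal AKS spot node pool, assuming the azurerm provider v3 (the auto-scaling attribute was renamed in v4); resource names, VM size, and scaling bounds are illustrative:

```hcl
# Spot node pool attached to an existing AKS cluster.
# Azure may evict these nodes, so only spot-tolerant workloads belong here.
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spot"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.main.id # assumed cluster resource
  vm_size               = "Standard_D4s_v5"
  priority              = "Spot"
  eviction_policy       = "Delete"
  spot_max_price        = -1 # -1 = pay up to the on-demand price, never more
  enable_auto_scaling   = true
  min_count             = 1
  max_count             = 10

  # AKS applies this taint to spot nodes; workloads must tolerate it explicitly.
  node_taints = ["kubernetes.azure.com/scalesetpriority=spot:NoSchedule"]
  node_labels = {
    "kubernetes.azure.com/scalesetpriority" = "spot"
  }
}
```

The taint is the key design choice: stateless, restartable workloads opt in via a toleration, while anything eviction-sensitive stays on the on-demand system pool.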
7b2e1f9 May 2022 — Sep 2022

Junior DevOps Engineer @ ING Hubs Poland

📍 Katowice

Managed production, staging, and development environments across enterprise middleware. Led migration of customer applications from on-premise infrastructure to ING Private Cloud.

Changes:
  • + Migrated customer applications from on-premise to ING Private Cloud
  • + Built and maintained Azure Pipelines for CI/CD automation
  • + Managed enterprise middleware: IBM WebSphere, JBoss, WebLogic, Apache Tomcat
Azure Pipelines · IBM WebSphere · JBoss · Nginx · WebLogic
e5d4a8c Jun 2021 — Apr 2022

Junior Linux Administrator @ Kyndryl

📍 Wrocław

Deep-dived into Red Hat Linux systems administration. Progressed from L2 to L3 support, handling complex Unix system issues in production environments.

Changes:
  • + Progressed from L2 to L3 support through demonstrated expertise
  • + Mastered Bash scripting, disk management, LVM, networking, and VLAN configuration
  • + Supported production environments with high-availability requirements
Red Hat Linux · Bash · LVM · Networking · VLAN

Incident Reports
War stories from production

Timeline

14:32 UTC PagerDuty alert: 90% of compute fleet unavailable
14:35 UTC Identified all spot instances reclaimed simultaneously in eu-west-1a
14:41 UTC Triggered on-demand fallback, began redistributing across AZs
14:55 UTC Full capacity restored, all services healthy

Root Cause

Entire spot fleet was provisioned in a single availability zone (eu-west-1a). AWS reclaimed all spot capacity in that AZ during a regional demand spike, causing simultaneous termination of all instances.

Resolution

Implemented multi-AZ spread constraints in ASG configuration. Added on-demand base capacity (20%) as fallback. Deployed capacity rebalancing with mixed instance policies across 3+ instance families.

23min MTTR
12 Affected Services

Lessons Learned

  • Never concentrate spot capacity in a single AZ — diversify across at least 3
  • Maintain an on-demand baseline for critical workloads
  • Spot interruption notices (2min) are not enough time for graceful failover without pre-provisioned capacity
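The resolution above maps directly onto a mixed-instances Auto Scaling group. A sketch in Terraform (subnet variable, launch template name, sizes, and instance types are illustrative): a 20% on-demand floor above a small base, capacity rebalancing enabled, and three instance families spread across AZs.

```hcl
# ASG spanning 3+ AZs via its subnets, mixing spot and on-demand capacity.
resource "aws_autoscaling_group" "workers" {
  name                = "spot-workers"
  vpc_zone_identifier = var.private_subnet_ids # subnets in at least 3 AZs
  min_size            = 3
  max_size            = 30
  capacity_rebalance  = true # proactively replace spot instances at elevated risk

  mixed_instances_policy {
    instances_distribution {
      on_demand_base_capacity                  = 2  # always-on-demand floor
      on_demand_percentage_above_base_capacity = 20 # ~20% on-demand above the base
      spot_allocation_strategy                 = "capacity-optimized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.worker.id # assumed template
        version            = "$Latest"
      }
      # Diversify across instance families so one pool being reclaimed
      # doesn't take out the whole fleet.
      override { instance_type = "m5.xlarge" }
      override { instance_type = "m5a.xlarge" }
      override { instance_type = "m6i.xlarge" }
    }
  }
}
```

With this shape, a reclaim event in one AZ or one instance pool degrades capacity instead of zeroing it.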

Timeline

06:00 UTC Began dual-write phase: Prometheus → Mimir + existing TSDB
08:30 UTC Validated query parity across 47 critical dashboards
12:00 UTC Cutover read path to Mimir, Prometheus demoted to write-only
18:00 UTC Full migration complete, Prometheus decommissioned

Root Cause

Self-hosted Prometheus hitting vertical scaling limits at 2M+ active time series. Single-node architecture created a SPOF for all observability. Needed horizontal scalability and long-term storage.

Resolution

Deployed Grafana Mimir in microservices mode with S3 backend. Used dual-write strategy during migration to ensure zero data loss. Implemented recording rules to reduce cardinality by 35%.

0min MTTR
0 Affected Services

Lessons Learned

  • Dual-write migrations eliminate the 'big bang' cutover risk
  • Always validate dashboard query parity before switching read paths
  • Recording rules should be implemented proactively, not as a migration afterthought
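The dual-write phase can be sketched via the kube-prometheus-stack Helm chart driven from Terraform; the endpoints, namespace, and release name below are assumptions, not the actual deployment. Prometheus remote-writes every sample to both backends until query parity is validated, after which the legacy entry is removed.

```hcl
# Dual-write: Prometheus ships samples to both the existing TSDB and Mimir.
resource "helm_release" "prometheus" {
  name       = "prometheus"
  namespace  = "monitoring"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"

  values = [yamlencode({
    prometheus = {
      prometheusSpec = {
        remoteWrite = [
          # Existing long-term store (hypothetical in-cluster endpoint)
          { url = "http://legacy-tsdb.monitoring.svc:9090/api/v1/write" },
          # Grafana Mimir distributor; /api/v1/push is Mimir's remote-write path
          { url = "http://mimir-distributor.monitoring.svc:8080/api/v1/push" },
        ]
      }
    }
  })]
}
```

Cutover then reduces to two reviewable diffs: point Grafana's data source at Mimir, and later drop the legacy `remoteWrite` entry.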

Timeline

09:17 UTC Multiple services report 403 Forbidden from upstream APIs
09:19 UTC Correlated with Cloudflare WAF rule deployment 4min prior
09:22 UTC Identified overly aggressive rate limiting rule blocking internal service mesh traffic
09:28 UTC Rolled back WAF rule, services recovered immediately

Root Cause

A new Cloudflare WAF rate limiting rule was deployed without exemptions for internal service-to-service traffic. The rule's threshold was set too low, treating legitimate internal API calls as abuse.

Resolution

Built a WAF rule testing pipeline that validates rules against recorded production traffic patterns before deployment. Implemented canary deployments for security policies — new rules deploy to 5% of traffic first with automated rollback on error rate spikes.

11min MTTR
8 Affected Services

Lessons Learned

  • Security policies need the same CI/CD rigor as application code
  • Internal service traffic must be explicitly allowlisted in WAF rules
  • Canary deployments aren't just for apps — security policies benefit equally
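The allowlisting lesson can be sketched as a zone rate-limit ruleset in Terraform, written against the Cloudflare provider v4 (the rules schema differs in v5); the variable, IP list name, path, and thresholds are all illustrative. The exemption lives in the rule expression itself, so internal mesh egress IPs never count toward the limit.

```hcl
# Rate limit external API traffic while exempting internal service-to-service
# calls. "$internal_egress_ips" is a hypothetical Cloudflare IP list holding
# the mesh's egress addresses.
resource "cloudflare_ruleset" "api_rate_limit" {
  zone_id = var.zone_id # illustrative variable
  name    = "api-rate-limiting"
  kind    = "zone"
  phase   = "http_ratelimit"

  rules {
    action      = "block"
    description = "Rate limit external API traffic (internal IPs exempt)"
    expression  = "starts_with(http.request.uri.path, \"/api/\") and not ip.src in $internal_egress_ips"

    ratelimit {
      characteristics     = ["ip.src", "cf.colo.id"]
      period              = 60   # seconds
      requests_per_period = 1000
      mitigation_timeout  = 600  # block offenders for 10 minutes
    }
  }
}
```

Because the rule is plain HCL, it flows through the same plan/review/canary pipeline as any other infrastructure change, which is exactly the "security policies need CI/CD rigor" lesson.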

Infrastructure Map
How the systems connect

🌐 End Users: HTTPS, HTTP/2
🛡 Cloudflare Edge: WAF, CDN, DNS, DDoS, Zero-Trust
🔄 CI/CD Pipeline: GitHub, GitHub Actions, Azure ACR
AKS Cluster (production): Kubernetes, NGINX Ingress, Helm, ArgoCD, HPA
📊 Observability: Prometheus → Mimir, Fluent Bit → Loki, Grafana dashboards, Alertmanager
🗄 Data Layer: PostgreSQL (primary + replica), MSSQL, MySQL
🔒 Secrets & Policies: HashiCorp Vault, Network Policies, RBAC, mTLS
🏗 Infrastructure as Code: Terraform, Ansible, Azure Resource Manager

Request flow: Users → Cloudflare Edge → NGINX Ingress → application workloads.
Deploy flow: GitHub → GitHub Actions (build, test, scan) → Azure ACR → ArgoCD (GitOps) → AKS.

terraform plan
Infrastructure I've Built

terraform plan — main.tf

Terraform will perform the following actions:

  + resource "infrastructure" "monitoring_stack" {
      + name        = "Production Monitoring Stack"
      + description = "End-to-end observability platform built from scratch: Prometheus for metrics, Grafana Mimir for long-term storage, Loki for logs, Fluent Bit for log forwarding, and Grafana for visualization."
      + tech_stack  = ["Prometheus", "Grafana Mimir", "Loki", "Fluent Bit", "Grafana"]
    }

  + resource "platform" "kubernetes_platform" {
      + name        = "Kubernetes Platform on AKS"
      + description = "Production-grade Kubernetes platform on Azure AKS with Helm charts, ArgoCD for GitOps deployments, and automated scaling with spot instances for cost optimization."
      + tech_stack  = ["AKS", "Helm", "ArgoCD", "Terraform", "Docker"]
    }

  + resource "automation" "cicd_pipelines" {
      + name        = "CI/CD Pipeline Automation"
      + description = "Comprehensive CI/CD system using GitHub Actions for build/test automation and ArgoCD for continuous deployment, with automated security scanning and artifact management."
      + tech_stack  = ["GitHub Actions", "ArgoCD", "Docker", "ACR"]
    }

  + resource "security" "security_layer" {
      + name        = "Zero-Trust Security Layer"
      + description = "Multi-layered security architecture with Cloudflare WAF, rate limiting, HashiCorp Vault for secrets management, and network policies for Kubernetes workloads."
      + tech_stack  = ["Cloudflare", "HashiCorp Vault", "Kubernetes Network Policies"]
    }

Plan: 4 to add, 0 to change, 0 to destroy.
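The network-policy layer inside the security resource can be sketched with the Terraform Kubernetes provider; the namespace and labels below are illustrative, following the common default-deny pattern: block all ingress, then allowlist only the ingress controller's namespace.

```hcl
# Default-deny: no pod in "production" accepts ingress traffic unless a
# later policy explicitly allows it.
resource "kubernetes_network_policy" "default_deny_ingress" {
  metadata {
    name      = "default-deny-ingress"
    namespace = "production" # assumed namespace
  }
  spec {
    pod_selector {}          # empty selector = all pods in the namespace
    policy_types = ["Ingress"]
  }
}

# Allowlist: permit traffic only from the ingress-nginx namespace.
resource "kubernetes_network_policy" "allow_from_ingress" {
  metadata {
    name      = "allow-from-ingress"
    namespace = "production"
  }
  spec {
    pod_selector {}
    policy_types = ["Ingress"]
    ingress {
      from {
        namespace_selector {
          # Label set automatically by Kubernetes on every namespace
          match_labels = { "kubernetes.io/metadata.name" = "ingress-nginx" }
        }
      }
    }
  }
}
```

Policies are additive, so further allowlists (e.g. Prometheus scraping, service-to-service calls) are separate small resources rather than edits to the deny rule.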

How This Site Was Built
This portfolio is a DevOps project

Framework Astro 5.x
Static Site Generator

Zero JS by default, islands architecture for interactive components. Build time: <1s.

Hosting Cloudflare Pages
Global CDN

Global CDN, automatic SSL, preview deploys on every push. Free tier.

Animations GSAP + ScrollTrigger
Scroll-driven

Scroll-driven animations, 45KB gzipped total JS bundle.

Design Vanilla CSS + Custom Properties
No framework

No CSS framework. Light/dark theme via CSS variables. 10KB gzipped CSS.

Deployment Pipeline

source GitHub
build GitHub Actions
deploy Cloudflare Pages
live adamkierat.pl
Lighthouse 95+ · HTML 110KB · JS 47KB gzip · CSS 10KB gzip · Build <1s
View source on GitHub

kubectl exec -it
Let's connect

kubectl exec -it adam -- /bin/bash
Welcome to Adam Kierat's terminal.
Type help to see available commands.
 
adam@portfolio:~$