Adam Kierat
I build the infrastructure that never makes the headlines.
Tech Lead / Cloud Architecture / Platform Engineering
Building infrastructure
from the ground up
I'm an Infrastructure Tech Lead based in Gliwice, Poland, with a passion for turning complex infrastructure challenges into elegant, automated solutions.
At Beesafe, I architected the entire cloud platform from scratch and now lead all technical decisions across infrastructure, security, and deployment strategy. I don't just configure tools; I design systems that scale, self-heal, and let development teams ship faster.
My journey started with Linux systems administration, progressed through enterprise middleware at ING, and evolved into full-stack DevOps. I think in terms of terraform plan, dream in YAML, and measure success in uptime.
name: Adam Kierat
role: Infrastructure Tech Lead
location: Gliwice, Poland
education:
degree: BSc Computer Science
university: Silesian University of Technology
languages:
- Polish # native
- English # C1
interests:
- Cloud Architecture
- Platform Engineering
- Infrastructure Automation
status: operational # 99.9% uptime
Numbers That Matter
Quantified results from production
Credentials
git log
--oneline --graph
Infrastructure Tech Lead @ Beesafe
Warsaw HEAD
Promoted from DevOps Engineer to Tech Lead. Architected the entire cloud platform from zero and now lead all technical decisions across infrastructure, security, and deployment strategy for the platform serving 30+ production microservices.
- + Architected the entire cloud platform from zero — AKS clusters, GitOps pipelines, observability stack, secrets management serving 30+ microservices
- + Led technical decision-making across infrastructure, security, and deployment strategy for all development teams
- + Reduced Azure cloud spend by ~30% through spot instance strategies, reserved capacity, and resource right-sizing
- + Designed zero-trust security architecture with Cloudflare WAF, network segmentation, and Vault — zero security incidents since implementation
- + Increased deployment frequency from weekly manual releases to 15+ automated deploys per day via ArgoCD and Terraform
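The GitOps flow behind those deploy numbers can be sketched as a minimal ArgoCD Application manifest — the service name, namespace, and repo URL below are illustrative placeholders, not the actual platform config:

```yaml
# Hypothetical ArgoCD Application — repo URL, paths, and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service          # example microservice
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deployments.git
    targetRevision: main
    path: apps/payments-service/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true                 # delete resources removed from Git
      selfHeal: true              # revert manual drift in the cluster
    syncOptions:
      - CreateNamespace=true
```

With `automated` sync plus `selfHeal`, every merge to main becomes a deploy and manual cluster drift is reverted — which is what turns weekly manual releases into 15+ hands-off deploys per day.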
Junior DevOps Engineer @ ING Hubs Poland
Katowice
Managed production, staging, and development environments across enterprise middleware. Led migration of customer applications from on-premise infrastructure to ING Private Cloud.
- + Migrated customer applications from on-premise to ING Private Cloud
- + Built and maintained Azure Pipelines for CI/CD automation
- + Managed enterprise middleware: IBM WebSphere, JBoss, WebLogic, Apache Tomcat
Junior Linux Administrator @ Kyndryl
Wrocław
Specialized in Red Hat Linux systems administration. Progressed from L2 to L3 support, handling complex Unix system issues in production environments.
- + Progressed from L2 to L3 support through demonstrated expertise
- + Mastered Bash scripting, disk management, LVM, networking, and VLAN configuration
- + Supported production environments with high-availability requirements
Incident Reports
War stories from production
Timeline
Root Cause
Entire spot fleet was provisioned in a single availability zone (eu-west-1a). AWS reclaimed all spot capacity in that AZ during a regional demand spike, causing simultaneous termination of all instances.
Resolution
Implemented multi-AZ spread constraints in ASG configuration. Added on-demand base capacity (20%) as fallback. Deployed capacity rebalancing with mixed instance policies across 3+ instance families.
Lessons Learned
- → Never concentrate spot capacity in a single AZ — diversify across at least 3
- → Maintain an on-demand baseline for critical workloads
- → Spot interruption notices (2min) are not enough time for graceful failover without pre-provisioned capacity
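The fix described above translates roughly to the following CloudFormation sketch — subnet IDs, sizes, and instance types are placeholders, and it assumes a `NodeLaunchTemplate` resource defined elsewhere in the stack:

```yaml
# Sketch of the post-incident ASG config — all IDs and sizes are illustrative.
Resources:
  SpotNodeGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "6"
      MaxSize: "30"
      CapacityRebalance: true            # proactively replace at-risk spot instances
      VPCZoneIdentifier:                 # spread across 3 AZs, never just one
        - subnet-aaa111                  # eu-west-1a
        - subnet-bbb222                  # eu-west-1b
        - subnet-ccc333                  # eu-west-1c
      MixedInstancesPolicy:
        InstancesDistribution:
          OnDemandBaseCapacity: 2                   # always-on fallback nodes
          OnDemandPercentageAboveBaseCapacity: 20   # ~20% on-demand overall
          SpotAllocationStrategy: capacity-optimized
        LaunchTemplate:
          LaunchTemplateSpecification:
            LaunchTemplateId: !Ref NodeLaunchTemplate
            Version: !GetAtt NodeLaunchTemplate.LatestVersionNumber
          Overrides:                     # diversify across 3+ instance families
            - InstanceType: m5.xlarge
            - InstanceType: m5a.xlarge
            - InstanceType: m6i.xlarge
            - InstanceType: r5.xlarge
```

`capacity-optimized` allocation plus capacity rebalancing means AWS picks the spot pools least likely to be reclaimed and starts replacements when an interruption is predicted, rather than waiting for the 2-minute notice.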
Timeline
Root Cause
Self-hosted Prometheus hitting vertical scaling limits at 2M+ active time series. Single-node architecture created a SPOF for all observability. Needed horizontal scalability and long-term storage.
Resolution
Deployed Grafana Mimir in microservices mode with S3 backend. Used dual-write strategy during migration to ensure zero data loss. Implemented recording rules to reduce cardinality by 35%.
Lessons Learned
- → Dual-write migrations eliminate the 'big bang' cutover risk
- → Always validate dashboard query parity before switching read paths
- → Recording rules should be implemented proactively, not as a migration afterthought
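The dual-write phase amounts to the existing Prometheus keeping its local TSDB as the read path while also remote-writing every sample to Mimir. A minimal sketch of the relevant `prometheus.yml` fragment — the endpoint, tenant ID, and queue numbers are placeholders:

```yaml
# Dual-write sketch: local TSDB stays the read path; Mimir receives a copy.
# Endpoint, tenant, and queue sizing below are illustrative, not the real config.
remote_write:
  - url: http://mimir-distributor.observability.svc:8080/api/v1/push
    headers:
      X-Scope-OrgID: prod            # Mimir tenant (assumes multi-tenancy enabled)
    queue_config:
      capacity: 10000                # buffer samples during Mimir hiccups
      max_shards: 50                 # bound resharding under write backlog
```

Once Grafana dashboards return identical results from both backends, the read path flips to Mimir and the local retention window can shrink. The cardinality win came from recording rules that pre-aggregate hot queries, e.g. recording `sum by (job) (rate(http_requests_total[5m]))` once instead of evaluating it per dashboard panel.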
Timeline
Root Cause
A new Cloudflare WAF rate limiting rule was deployed without exemptions for internal service-to-service traffic. The rule's threshold was set too low, treating legitimate internal API calls as abuse.
Resolution
Built a WAF rule testing pipeline that validates rules against recorded production traffic patterns before deployment. Implemented canary deployments for security policies — new rules deploy to 5% of traffic first with automated rollback on error rate spikes.
Lessons Learned
- → Security policies need the same CI/CD rigor as application code
- → Internal service traffic must be explicitly allowlisted in WAF rules
- → Canary deployments aren't just for apps — security policies benefit equally
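The pipeline described above could look something like this — the schema, stage names, and every tool invoked (`waf-lint`, `traffic-replay`, `deploy-waf`, `watch-metrics`) are entirely hypothetical stand-ins, not real Cloudflare or vendor features:

```yaml
# Hypothetical policy-as-code pipeline for WAF changes.
# All tool names and flags below are placeholders for illustration only.
stages:
  - name: validate
    steps:
      - run: waf-lint rules/                 # syntax + internal-allowlist checks
      - run: traffic-replay --rules rules/ --capture prod-sample.jsonl
        # replay recorded production traffic; fail the build on false positives
  - name: canary
    steps:
      - run: deploy-waf --rules rules/ --traffic-percent 5
      - run: watch-metrics --metric edge_4xx_rate --max-increase 2% --for 15m
        # automated rollback if the canary slice shows an error-rate spike
  - name: rollout
    steps:
      - run: deploy-waf --rules rules/ --traffic-percent 100
```

The point is the shape, not the tooling: rate-limit rules get linted, replayed against real traffic, and canaried exactly like application code before they can touch 100% of requests.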
Infrastructure Map
How the systems connect
terraform plan
Infrastructure I've Built
Terraform will perform the following actions:
Plan: 4 to add, 0 to change, 0 to destroy.
How This Site Was Built
This portfolio is a DevOps project
Zero JS by default, islands architecture for interactive components. Build time: <1s.
Global CDN, automatic SSL, preview deploys on every push. Free tier.
Scroll-driven animations, 45KB gzipped total JS bundle.
No CSS framework. Light/dark theme via CSS variables. 10KB gzipped CSS.
Deployment Pipeline
kubectl exec -it
Let's connect
Prefer the traditional way? Here are my coordinates: