<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.cybershu.eu/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.cybershu.eu/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-03-04T10:09:38+01:00</updated><id>https://www.cybershu.eu/feed.xml</id><title type="html">CyberShu</title><subtitle>Generalist Blog IT
</subtitle><author><name>Michał Mazur</name></author><entry xml:lang="en"><title type="html">Why I Refuse to Ditch Line-by-Line Code Reviews for AI</title><link href="https://www.cybershu.eu/ai-code-review.html" rel="alternate" type="text/html" title="Why I Refuse to Ditch Line-by-Line Code Reviews for AI" /><published>2026-03-04T00:00:00+01:00</published><updated>2026-03-04T00:00:00+01:00</updated><id>https://www.cybershu.eu/ai-code-review</id><content type="html" xml:base="https://www.cybershu.eu/ai-code-review.html"><![CDATA[<p>In the rush to automate everything, traditional code reviews are becoming an endangered species. Why spend hours reading code when an LLM can scan it in seconds? It sounds tempting, but relying solely on AI for reviews might be a costly mistake.</p>

<p>I wouldn’t walk away from developer-led, line-by-line code review, because:</p>

<ol>
  <li>
    <p>For a review to be actually good, it needs multiple perspectives: technical correctness, adherence to coding standards, business correctness (logic), and architecture. I doubt a single prompt can cover all of that.</p>
  </li>
  <li>
    <p>To do a proper review, an AI agent would have to analyze dependencies, understand requirements, build code/logic diagrams, and <em>then</em> review—otherwise you’d need to dump half the repo into context and it’ll blow up in your face. As the saying goes: “It’s a recipe for disaster.”</p>
  </li>
</ol>

<!--more-->

<p>I see AI review as support—maybe even a replacement for small changes—but it won’t replace the eyes of an experienced developer who knows the domain and the system.</p>

<p>The bigger problem I see is organizational culture. AI spits out code in large volumes, and (generally) devs won’t thoroughly review PRs. And we, as developers, will be—are—pushed harder and harder to produce more and more code, faster, using AI. Today we’re told to use one agent; soon we’ll be expected to run at least five in parallel.</p>]]></content><author><name>Michał Mazur</name></author><category term="ai" /><category term="code-review" /><category term="coding" /><category term="software-engineering" /><summary type="html"><![CDATA[In the rush to automate everything, traditional code reviews are becoming an endangered species. Why spend hours reading code when an LLM can scan it in seconds? It sounds tempting, but relying solely on AI for reviews might be a costly mistake. I wouldn’t walk away from developer-led, line-by-line code review, because: For a review to be actually good, it needs multiple perspectives: technical correctness, adherence to coding standards, business correctness (logic), and architecture. I doubt a single prompt can cover all of that. To do a proper review, an AI agent would have to analyze dependencies, understand requirements, build code/logic diagrams, and then review—otherwise you’d need to dump half the repo into context and it’ll blow up in your face. As the saying goes: “It’s a recipe for disaster.”]]></summary></entry><entry xml:lang="en"><title type="html">Vibe coding might be very hard - presentation</title><link href="https://www.cybershu.eu/vibe-coding-might-be-very-hard.html" rel="alternate" type="text/html" title="Vibe coding might be very hard - presentation" /><published>2025-06-15T00:00:00+02:00</published><updated>2025-06-15T00:00:00+02:00</updated><id>https://www.cybershu.eu/vibe-coding</id><content type="html" xml:base="https://www.cybershu.eu/vibe-coding-might-be-very-hard.html"><![CDATA[<p>Building apps with AI is easier than ever.</p>

<p>Doing it right? Still hard. It still requires knowledge and skills. But it’s easier than ever.</p>

<!--more-->
<p>I put together a few practical tips for builders of the new era.</p>

<h2 id="presentation">Presentation</h2>
<embed src="../assets/docs/li-vibe-coding.pdf" type="application/pdf" width="100%" height="600px" />

<p><a href="../assets/docs/li-vibe-coding.pdf">Download PDF</a></p>

<h2 id="fast-tips-for-you">Fast tips for you</h2>
<h3 id="security">Security</h3>

<ul>
  <li><strong>Keep API keys secret:</strong> Never hardcode keys in your app. Store secrets in environment variables or a free secrets manager to keep them safe. 1Password has some interesting features that are worth looking at!</li>
  <li><strong>Scan for vulnerabilities:</strong> Run a free Snyk scan (or enable GitHub Dependabot) to catch known security flaws in your code/dependencies early.</li>
  <li><strong>HTTPS everywhere:</strong> Always serve your app over HTTPS. Tools like Let’s Encrypt give free SSL certificates!</li>
  <li><strong>Lock your accounts:</strong> Enable 2-factor authentication on critical services (GitHub, cloud platforms, AI tools etc.). It’s free and prevents easy hacks on your accounts.</li>
  <li><strong>Set hard budget limits</strong> in all tools that charge for usage, like ChatGPT. It might save you a lot of money!</li>
</ul>
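<p>The first tip can be sketched in a few lines of shell. This is a minimal example, not a full secrets-manager integration; the variable name <code>OPENAI_API_KEY</code> is only an illustration.</p>

```shell
# Fail fast when a required secret is missing from the environment,
# instead of hardcoding it in the source. The variable name is an example.
require_secret() {
  # $1 = name of the required environment variable
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "missing secret: $1" >&2
    return 1
  fi
  echo "secret $1 is set (${#val} chars)"   # report length only, never the value
}

# Typical usage at the top of a deploy script:
#   require_secret OPENAI_API_KEY || exit 1
```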

<h3 id="version-control">Version Control</h3>
<ul>
  <li><strong>Stop emailing zip files:</strong> Use Git for version control. It tracks every change so you can undo mistakes and never lose code again.</li>
  <li><strong>Free cloud backup:</strong> Push your code to GitHub or GitLab (free private repos) for safekeeping. If your laptop crashes, your code stays safe in the cloud.</li>
  <li><strong>Branch for safety:</strong> Experiment on a new branch instead of directly on main. Merge when it’s working, or delete it if not — no harm done to your main codebase.</li>
  <li><strong>GitHub Desktop helps:</strong> If the command line scares you, use the free GitHub Desktop app. It’s a point-and-click way to commit, push, and pull code – no terminal needed. Most code editors also ship with a built-in graphical interface for Git.</li>
  <li><strong>Automate testing &amp; deploys:</strong> Use GitHub Actions (free) to run checks or deploy your app on every push. Automation catches issues early and saves manual effort.</li>
</ul>
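<p>The “branch for safety” flow from the list above, as a throwaway-repo sketch (the repo path and branch names are just examples):</p>

```shell
# Experiment on a branch, merge it back when it works - all in a temp repo.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com   # local identity so commits work anywhere
git config user.name  "Demo"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"
main=$(git rev-parse --abbrev-ref HEAD)  # main or master, depending on your git config

git checkout -qb experiment              # experiment away from the main branch
echo "v2" > app.txt
git commit -qam "try a risky change"

git checkout -q "$main"                  # happy with it? merge...
git merge -q experiment
git branch -qd experiment                # ...and clean up - no harm done to main
cat app.txt                              # now contains v2
```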

<h3 id="deployment">Deployment</h3>
<ul>
  <li><strong>One-click hosting:</strong> Deploy your app on platforms like Railway or Vercel. You just push code and chill.</li>
  <li><strong>No server? No problem:</strong> Firebase offers hosting, database, and auth with a free plan. Great way to get backend features without managing any servers.</li>
  <li><strong>Static site hero:</strong> Use Netlify, Vercel, or GitHub Pages for a static website or frontend app. They’re free – just connect your repo or drop your files to go live.</li>
  <li><strong>Continuous deploy:</strong> Link your code repo to your host (Railway, Vercel, etc.). Every time you push changes, the service auto-deploys your app. No more manual uploads!</li>
  <li><strong>Serverless functions:</strong> Need a bit of backend logic? Use cloud functions on a free tier (Firebase Functions, Vercel, Netlify, AWS, GCP, Azure). Your code runs on demand.</li>
</ul>

<h3 id="rules">Rules</h3>
<ul>
  <li><strong>Split project into smaller tasks</strong> – Easier to manage, debug, and parallelize.</li>
  <li><strong>Write detailed documentation first</strong> – Force the model/agent to follow it. Include goals, edge cases, and examples.</li>
  <li><strong>Monitor token and compute costs</strong> – Large files or loops can explode usage. If your app is using AI, then integrate your app with LangSmith or a similar observability tool.</li>
  <li><strong>Limit workspace scope</strong> – Restrict to relevant directories or files only for each task.</li>
  <li><strong>Define clear input/output format</strong> – Avoid ambiguity. Keep structure strict and predictable. Define all details and rules in each prompt.</li>
  <li><strong>Use isolated test cases</strong> – Help the model validate logic step-by-step. Ask the model to create tests according to BDD (Behavior Driven Development).</li>
  <li><strong>Fail fast, retry smart</strong> – Catch issues early, and design fallback or retry logic.</li>
</ul>

<h2 id="wrap-up">Wrap-up</h2>
<p>I hope the article at least intrigued you and that you learned something new. If you would like to see more articles like this, please let me know!</p>

<h2 id="contact">Contact</h2>
<p>Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="vibe-coding" /><category term="coding" /><category term="cursor" /><category term="llm" /><category term="ai" /><category term="devops" /><category term="security" /><category term="practises" /><category term="presentation" /><category term="cloud" /><category term="tools" /><summary type="html"><![CDATA[Building apps with AI is easier than ever. Doing it right? Still hard. It still requires knowledge and skills. But it’s easier than ever.]]></summary></entry><entry xml:lang="en"><title type="html">No More PowerPoint - Use Marp</title><link href="https://www.cybershu.eu/no-more-powerpoint.html" rel="alternate" type="text/html" title="No More PowerPoint - Use Marp" /><published>2025-02-19T00:00:00+01:00</published><updated>2025-02-19T00:00:00+01:00</updated><id>https://www.cybershu.eu/no-more-powerpoint</id><content type="html" xml:base="https://www.cybershu.eu/no-more-powerpoint.html"><![CDATA[<p>Last week, I was tasked with giving a presentation on “Dependabot” - a tool for automatically updating dependencies in projects.</p>

<p>After my recent presentations, I felt tired of PowerPoint and decided to do something different.
<!--more--></p>

<h2 id="why-i-got-tired-of-powerpoint">Why I got tired of PowerPoint</h2>

<p>Form, form, form. I was tired of the form.
PowerPoint forces you to focus on form: layouts, colors, pixels, and so on.
I don’t like frontend development, and I’m not a designer.
I prefer to focus on the content, not the layout or the pixels.
I wanted a presentation that was simple and minimalistic.</p>

<p><img src="../assets/images/posts/no-more-powerpoint/meme-the-office.jpg" alt="The Office meme" class="image image--md" style="display: block; margin-left: auto; margin-right: auto;" /></p>

<h2 id="marp---a-simple-solution">Marp - a simple solution</h2>

<p>I used Markdown - a plain-text format - to create the presentation. Without using any AI :P</p>

<p>It’s simple because all you need is a Markdown text file and a tool to display it, such as the <code class="language-plaintext highlighter-rouge">Marp for VS Code</code> plugin.
Then you can export the presentation to HTML or PDF. If you don’t use Visual Studio Code, you can use the Marp CLI, for example, to generate the presentation in PDF/HTML format.</p>

<p>The simplicity of this solution convinced me.</p>
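<p>For the CLI route, here is a sketch (it assumes Node.js and the official <code>@marp-team/marp-cli</code> package, and is guarded so it degrades to a hint where Marp isn’t installed):</p>

```shell
# A whole deck is just a Markdown file with Marp front matter.
cat > deck.md <<'EOF'
---
marp: true
---

# Hello from Marp

A slide is just Markdown.
EOF

# Convert it to PDF (or HTML) when the marp CLI is available.
if command -v marp >/dev/null 2>&1; then
  marp deck.md --pdf -o deck.pdf        # or: marp deck.md -o deck.html
else
  echo "marp not found; try: npm install -g @marp-team/marp-cli"
fi
```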

<h2 id="example-of-marp-presentation">Example of Marp presentation</h2>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">---</span>
<span class="na">marp</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">theme</span><span class="pi">:</span> <span class="s">default</span>
<span class="na">paginate</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">backgroundColor</span><span class="pi">:</span> <span class="c1">#fff</span>
<span class="na">style</span><span class="pi">:</span> <span class="pi">|</span>
  <span class="s">.columns {</span>
    <span class="s">display: grid;</span>
    <span class="s">grid-template-columns: repeat(2, minmax(0, 1fr));</span>
    <span class="s">gap: 1rem;</span>
  <span class="s">}</span>
  <span class="s">.small-text {</span>
    <span class="s">font-size: 0.75em;</span>
  <span class="s">}</span>
  <span class="s">.center {</span>
    <span class="s">text-align: center;</span>
  <span class="s">}</span>

  <span class="s">header,</span>
  <span class="s">footer {</span>
    <span class="s">padding:20px</span>
  <span class="s">}</span>

  <span class="s">img {</span>
    <span class="s">max-width: 100%;</span>
    <span class="s">max-height: 100%;</span>
  <span class="s">}</span>
<span class="s">---</span>

<span class="gh"># :popcorn:Dependabot: Automated Dependency Updates</span>
<span class="p">![</span><span class="nv">bg right</span><span class="p">](</span><span class="sx">./dp_logo.png</span><span class="p">)</span>

<span class="nt">&lt;small&gt;</span>
Michał Mazur
<span class="nt">&lt;/small&gt;</span>
<span class="p">
---</span>
<span class="c">&lt;!-- headingDivider: 0--&gt;</span><span class="sb">


</span><span class="gh"># What is Dependabot? 🤖</span>

Imagine having a dedicated team member who:
<span class="p">
-</span> Never sleeps and is fully automated
<span class="p">-</span> Checks your dependencies 24/7
<span class="p">-</span> Creates perfect pull requests
<span class="p">-</span> Knows about security issues before you do
<span class="p">-</span> Supports multiple ecosystems:
<span class="p">  -</span> github actions, npm, pip, Maven, NuGet, Bundler...
<span class="p">
---
</span>
<span class="gh"># Short Live Demo Time! 🎬</span>
</code></pre></div></div>

<p>As a result, you get something like this in PDF format:
<a href="../assets/images/posts/no-more-powerpoint/marp_example.pdf">pdf</a></p>

<h2 id="links">Links</h2>
<ul>
  <li><strong>Marpit:</strong> <a href="https://marpit.marp.app/">https://marpit.marp.app/</a></li>
  <li><strong>Marp for VS Code:</strong> <a href="https://marketplace.visualstudio.com/items?itemName=marp-team.marp-vscode">https://marketplace.visualstudio.com/items?itemName=marp-team.marp-vscode</a></li>
</ul>

<h2 id="wrap-up">Wrap-up</h2>
<p>I hope the article at least intrigued you and that you learned something new.</p>

<h2 id="contact">Contact</h2>
<p>Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="marp" /><category term="power-point" /><category term="presenation" /><category term="visual" /><category term="tool" /><category term="markdown" /><category term="pitch" /><category term="minimalism" /><category term="common-mark" /><summary type="html"><![CDATA[Last week, I was tasked with giving a presentation on “Dependabot” - a tool for automatically updating dependencies in projects. After the last presentations, I felt tired of PowerPoint and decided to do something different.]]></summary></entry><entry xml:lang="en"><title type="html">How to create new docker image tag without push on JFrog and AWS ECR</title><link href="https://www.cybershu.eu/new-tag-without-push-jfrog-aws-ecr.html" rel="alternate" type="text/html" title="How to create new docker image tag without push on JFrog and AWS ECR" /><published>2024-12-20T00:00:00+01:00</published><updated>2024-12-20T00:00:00+01:00</updated><id>https://www.cybershu.eu/new-tag-without-push</id><content type="html" xml:base="https://www.cybershu.eu/new-tag-without-push-jfrog-aws-ecr.html"><![CDATA[<p>Hi  ✋,</p>

<p>I was recently working on signing Docker images and found a performance issue. In order to sign an image,
I had to pull it from the registry, calculate the hash, create the signature file, and push the new tag to Artifactory. This process took a lot of time, especially for large images. I was looking for a way to optimise it.
 <!--more-->
I found a hack to remotely create new tags in JFrog and AWS ECR Docker repositories without pulling and pushing the image. It’s a simple trick, but it can save you a lot of time. I will show you an example of how to do it in GitHub Actions.</p>

<h2 id="classic-method">Classic method</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull artifactory.example.com/my-image:1.0.0
docker tag artifactory.example.com/my-image:1.0.0 artifactory.example.com/my-image:1.0.1
docker push artifactory.example.com/my-image:1.0.1
</code></pre></div></div>

<h2 id="jfrog">JFrog</h2>

<p>The example pipeline steps below use the JFrog API to fetch the image manifest and add a new tag to the image, without transferring any layers.</p>

<h3 id="pipeline-configuration">Pipeline configuration</h3>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">env</span><span class="pi">:</span>
  <span class="na">DEVHUB_REGISTRY_URL</span><span class="pi">:</span> <span class="s">artifactory.example.com</span> <span class="c1"># JFrog registry url</span>
  <span class="na">DEVHUB_REPO_NAME</span><span class="pi">:</span> <span class="s">docker</span> <span class="c1"># JFrog docker repository name</span>
  <span class="na">IMAGE_NAME</span><span class="pi">:</span> <span class="s">my-image</span> <span class="c1"># Docker image name</span>
  <span class="na">SEMVER</span><span class="pi">:</span> <span class="s">1.0.0</span> <span class="c1"># Current image tag</span>
  <span class="na">TARGET_SEMVER</span><span class="pi">:</span> <span class="s">1.0.1</span> <span class="c1"># New image tag</span>
</code></pre></div></div>

<p><strong>Secrets:</strong></p>
<ul>
  <li>ARTIFACTORY_USERNAME</li>
  <li>ARTIFACTORY_TOKEN</li>
</ul>
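<p>The idea behind the steps (the embedded gist below has the full workflow) is that JFrog serves the Docker Registry v2 API, so you can fetch the manifest of the existing tag and <code>PUT</code> it back under the new tag without moving any layers. The sketch below is a dry run that only prints the requests it would make; the exact path can differ depending on how your Artifactory instance exposes Docker repositories.</p>

```shell
# Outline of a manifest-based retag against a JFrog Docker registry.
# Dry run: it prints the two requests instead of sending them.
set -eu
DEVHUB_REGISTRY_URL=artifactory.example.com
DEVHUB_REPO_NAME=docker
IMAGE_NAME=my-image
SEMVER=1.0.0
TARGET_SEMVER=1.0.1
MEDIA_TYPE=application/vnd.docker.distribution.manifest.v2+json

src="https://$DEVHUB_REGISTRY_URL/v2/$DEVHUB_REPO_NAME/$IMAGE_NAME/manifests/$SEMVER"
dst="https://$DEVHUB_REGISTRY_URL/v2/$DEVHUB_REPO_NAME/$IMAGE_NAME/manifests/$TARGET_SEMVER"

# With ARTIFACTORY_USERNAME/ARTIFACTORY_TOKEN available you would run:
#   curl -fsS -u "$ARTIFACTORY_USERNAME:$ARTIFACTORY_TOKEN" \
#        -H "Accept: $MEDIA_TYPE" "$src" -o manifest.json
#   curl -fsS -u "$ARTIFACTORY_USERNAME:$ARTIFACTORY_TOKEN" \
#        -H "Content-Type: $MEDIA_TYPE" -X PUT --data-binary @manifest.json "$dst"
echo "GET $src"
echo "PUT $dst"
```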

<h3 id="pipeline-steps">Pipeline steps</h3>
<script src="https://gist.github.com/da298ab933908a3d0610e75b18a513d9.js?file=jfrog-steps.yml"> </script>

<h2 id="aws-ecr">AWS ECR</h2>

<h3 id="pipeline-configuration-1">Pipeline configuration</h3>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">env</span><span class="pi">:</span>
  <span class="na">AWS_REGION</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span> <span class="c1"># AWS region name i.e us-east-1</span>
  <span class="na">AWS_WORKFLOW_ROLE</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span> <span class="c1"># AWS role arn</span>
  <span class="na">AWS_ROLE_SESSION_NAME</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span> <span class="c1"># AWS role session name</span>

  <span class="na">IMAGE_NAME</span><span class="pi">:</span> <span class="s">my-image</span> <span class="c1"># Docker image name</span>
  <span class="na">SEMVER</span><span class="pi">:</span> <span class="s">1.0.0</span> <span class="c1"># Current image tag</span>
  <span class="na">TARGET_SEMVER</span><span class="pi">:</span> <span class="s">1.0.1</span> <span class="c1"># New image tag</span>
</code></pre></div></div>

<h3 id="pipeline-steps-1">Pipeline steps</h3>
<p class="warning"><strong>Warning:</strong>
Authentication with AWS ECR requires AWS credentials. Please make sure you have configured AWS credentials in your pipeline. My example uses one of the possible methods to configure AWS credentials in GitHub Actions.</p>
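<p>For ECR the same trick goes through the AWS CLI: <code>batch-get-image</code> returns the manifest of the current tag and <code>put-image</code> registers it under the new one. The sketch below stays a dry run unless you opt in via the (hypothetical) <code>RUN_ECR_RETAG</code> flag, since the real calls need AWS credentials:</p>

```shell
# Retag an ECR image by manifest - no docker pull/push involved.
set -eu
IMAGE_NAME=my-image
SEMVER=1.0.0
TARGET_SEMVER=1.0.1

# RUN_ECR_RETAG is a made-up opt-in flag for this sketch.
if [ "${RUN_ECR_RETAG:-0}" = "1" ] && command -v aws >/dev/null 2>&1; then
  MANIFEST=$(aws ecr batch-get-image --repository-name "$IMAGE_NAME" \
    --image-ids imageTag="$SEMVER" \
    --query 'images[0].imageManifest' --output text)
  aws ecr put-image --repository-name "$IMAGE_NAME" \
    --image-tag "$TARGET_SEMVER" --image-manifest "$MANIFEST"
  result="retagged $IMAGE_NAME:$SEMVER as $IMAGE_NAME:$TARGET_SEMVER"
else
  result="dry run: would retag $IMAGE_NAME:$SEMVER as $IMAGE_NAME:$TARGET_SEMVER"
fi
echo "$result"
```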

<script src="https://gist.github.com/da298ab933908a3d0610e75b18a513d9.js?file=ecr-steps.yml"> </script>

<h2 id="wrap-up">Wrap-up</h2>
<p>I hope the article at least intrigued you and that you learned something new.</p>

<h2 id="contact">Contact</h2>
<p>Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="docker" /><category term="jfrog" /><category term="artifactory" /><category term="ecr" /><category term="aws" /><category term="tag" /><category term="push" /><category term="pull" /><category term="image" /><category term="container" /><category term="registry" /><summary type="html"><![CDATA[Hi ✋, I was recently working on signing Docker images and found a performance issue. In order to sign the image, I had to pull the image from the registry, calculate the hash and create the signature file, and push the new tag to artifactory. This process took a lot of time, especially if the image was large. I was looking for a way to optimise this process.]]></summary></entry><entry xml:lang="en"><title type="html">What is a work log and how to use it to speed up your career?</title><link href="https://www.cybershu.eu/work-log-career-10x.html" rel="alternate" type="text/html" title="What is a work log and how to use it to speed up your career?" /><published>2024-10-11T00:00:00+02:00</published><updated>2024-10-11T00:00:00+02:00</updated><id>https://www.cybershu.eu/work-log</id><content type="html" xml:base="https://www.cybershu.eu/work-log-career-10x.html"><![CDATA[<h1 id="what-is-a-work-log-and-how-to-use-it-to-speed-up-your-career">What is a work log and how to use it to speed up your career?</h1>

<p>As I was diving into “The Software Engineer’s Guidebook” by Gergely Orosz, I stumbled upon a game-changing tip: keeping a work log.</p>

<p>This practice has been a revelation for me—it’s more than just tracking daily tasks. A work log helps you:</p>

<p>✅ Track Your Progress: Stay on top of your projects and see how far you’ve come.</p>

<p>🎉 Celebrate Wins: Big or small, every achievement counts. Documenting them keeps your morale high.</p>

<p>🔄 Learn from Mistakes: Reflect on challenges and turn them into growth opportunities.</p>

<p>💼 Prepare for Promotions: When it’s time to discuss your next career move, having a detailed record of your contributions is invaluable.</p>

<p>Want to start your own work log? I’ve put together a simple template to get you going:</p>

<h2 id="my-weekly-template">My weekly template</h2>

<div class="language-md highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">-</span> What work do I feel most proud of?
<span class="p">-</span> Are there themes in these projects I should be thinking about? What’s the big picture of what I’m working on? (am I working a lot on security? localization?).
<span class="p">-</span> What do I wish I was doing more / less of?
<span class="p">-</span> Which of my projects had the effect I wanted, and which didn’t? Why might that have been?
<span class="p">-</span> What could have gone better with project X? What might I want to do differently next time?

<span class="gh"># Developer Work Log</span>

<span class="gu">## Week of [Month Day, Year]</span>

<span class="gu">### Goals</span>
<span class="p">-</span> [ ]
<span class="p">-</span> [ ]
<span class="p">-</span> [ ]<span class="sb">


</span><span class="gu">### Monday, [Date]</span>
<span class="gu">#### Tasks</span>
<span class="p">-</span> [ ]

<span class="gu">#### 🚀 Achievements</span>
<span class="p">-</span>

<span class="gu">#### 🛠️ Challenges</span>
<span class="p">-</span>

<span class="gu">#### 💡 Learnings</span>
<span class="p">-</span>

<span class="gu">#### 🎯 Goals for Tomorrow</span>
<span class="p">-</span><span class="sb">


</span><span class="gu">### Tuesday, [Date]</span>
<span class="gu">#### Tasks</span>
<span class="p">-</span> [ ]

<span class="gu">#### 🚀 Achievements</span>
<span class="p">-</span>

<span class="gu">#### 🛠️ Challenges</span>
<span class="p">-</span>

<span class="gu">#### 💡 Learnings</span>
<span class="p">-</span>

<span class="gu">#### 🎯 Goals for Tomorrow</span>
<span class="p">-</span><span class="sb">



</span><span class="gu">### Wednesday, [Date]</span>
<span class="gu">#### Tasks</span>
<span class="p">-</span> [ ]

<span class="gu">#### 🚀 Achievements</span>
<span class="p">-</span>

<span class="gu">#### 🛠️ Challenges</span>
<span class="p">-</span>

<span class="gu">#### 💡 Learnings</span>
<span class="p">-</span>

<span class="gu">#### 🎯 Goals for Tomorrow</span>
<span class="p">-</span>

<span class="gu">### Thursday,  [Date]</span>
<span class="gu">#### Tasks</span>
<span class="p">-</span> [ ]

<span class="gu">#### 🚀 Achievements</span>
<span class="p">-</span>

<span class="gu">#### 🛠️ Challenges</span>
<span class="p">-</span>

<span class="gu">#### 💡 Learnings</span>
<span class="p">-</span>

<span class="gu">#### 🎯 Goals for Tomorrow</span>
<span class="p">-</span><span class="sb">


</span><span class="gu">### Friday, [Date]</span>
<span class="gu">#### Tasks</span>
<span class="p">-</span> [ ]

<span class="gu">#### 🚀 Achievements</span>
<span class="p">-</span>

<span class="gu">#### 🛠️ Challenges</span>
<span class="p">-</span>

<span class="gu">#### 💡 Learnings</span>
<span class="p">-</span>

<span class="gu">#### 🎯 Goals for Tomorrow</span>
<span class="p">-


---
</span>
<span class="gu">## Weekly Summary</span>

<span class="gu">### 🎯 Weekly Goals</span>
<span class="p">-</span> [Goal 1]
<span class="p">-</span> [Goal 2]
<span class="p">-</span> [Goal 3]

<span class="gu">### 🚀 Achievements</span>
<span class="p">-</span> [Major achievement 1]
<span class="p">-</span> [Major achievement 2]
<span class="p">-</span> [Major achievement 3]

<span class="gu">### 🛠️ Challenges</span>
<span class="p">-</span> [Significant challenge 1]
<span class="p">-</span> [Significant challenge 2]
<span class="p">-</span> [Significant challenge 3]

<span class="gu">### 💡 Learnings</span>
<span class="p">-</span> [Key learning 1]
<span class="p">-</span> [Key learning 2]
<span class="p">-</span> [Key learning 3]

<span class="gu">### 📅 Planning for Next Week</span>
<span class="p">-</span> [Objective 1]
<span class="p">-</span> [Objective 2]
<span class="p">-</span> [Objective 3]
</code></pre></div></div>]]></content><author><name>Michał Mazur</name></author><category term="work-log" /><category term="career" /><category term="development" /><category term="work" /><category term="job" /><summary type="html"><![CDATA[What is a work log and how to use it to speed up your career? As I was diving into “The Software Engineer’s Guidebook,” by Gergely Orosz I stumbled upon a game-changing tip: keeping a work log. This practice has been a revelation for me—it’s more than just tracking daily tasks. A work log helps you: ✅ Track Your Progress: Stay on top of your projects and see how far you’ve come. 🎉 Celebrate Wins: Big or small, every achievement counts. Documenting them keeps your morale high. 🔄 Learn from Mistakes: Reflect on challenges and turn them into growth opportunities. 💼 Prepare for Promotions: When it’s time to discuss your next career move, having a detailed record of your contributions is invaluable. Want to start your own work log? I’ve put together a simple template to get you going: My weekly template - What work do I feel most proud of? - Are there themes in these projects I should be thinking about? What’s the big picture of what I’m working on? (am I working a lot on security? localization?). - What do I wish I was doing more / less of? - Which of my projects had the effect I wanted, and which didn’t? Why might that have been? - What could have gone better with project X? What might I want to do differently next time? 
# Developer Work Log ## Week of [Month Day, Year] ### Goals - [ ] - [ ] - [ ] ### Monday, [Date] #### Tasks - [ ] #### 🚀 Achievements - #### 🛠️ Challenges - #### 💡 Learnings - #### 🎯 Goals for Tomorrow - ### Tuesday, [Date] #### Tasks - [ ] #### 🚀 Achievements - #### 🛠️ Challenges - #### 💡 Learnings - #### 🎯 Goals for Tomorrow - ### Wednesday, [Date] #### Tasks - [ ] #### 🚀 Achievements - #### 🛠️ Challenges - #### 💡 Learnings - #### 🎯 Goals for Tomorrow - ### Thursday, [Date] #### Tasks - [ ] #### 🚀 Achievements - #### 🛠️ Challenges - #### 💡 Learnings - #### 🎯 Goals for Tomorrow - ### Friday, [Date] #### Tasks - [ ] #### 🚀 Achievements - #### 🛠️ Challenges - #### 💡 Learnings - #### 🎯 Goals for Tomorrow - --- ## Weekly Summary ### 🎯 Weekly Goals - [Goal 1] - [Goal 2] - [Goal 3] ### 🚀 Achievements - [Major achievement 1] - [Major achievement 2] - [Major achievement 3] ### 🛠️ Challenges - [Significant challenge 1] - [Significant challenge 2] - [Significant challenge 3] ### 💡 Learnings - [Key learning 1] - [Key learning 2] - [Key learning 3] ### 📅 Planning for Next Week - [Objective 1] - [Objective 2] - [Objective 3]]]></summary></entry><entry xml:lang="en"><title type="html">Building and Running a JAR from the Tests Directory</title><link href="https://www.cybershu.eu/articles/mvn-tests-build-run-jar.html" rel="alternate" type="text/html" title="Building and Running a JAR from the Tests Directory" /><published>2024-07-01T00:00:00+02:00</published><updated>2024-07-01T00:00:00+02:00</updated><id>https://www.cybershu.eu/articles/build-run-jar-from-tests</id><content type="html" xml:base="https://www.cybershu.eu/articles/mvn-tests-build-run-jar.html"><![CDATA[<p>Hello there ✋,</p>

<p>I was tasked with integrating a Java class for data generation into our
CI/CD pipeline. The goal was to create a shaded JAR that could run directly from
the pipeline.</p>

<p>The challenge was that the class was in the tests directory, which Maven doesn’t
support for building JARs. I resolved this by using the maven-shade-plugin and
maven-resources-plugin to build a shaded JAR, including necessary resources like
SQL schema files.</p>

<p><img src="../assets/images/posts/mvn-tests-jar/whereisfile.png" alt="" /></p>

<p>I hope this example saves you time during development! 🚀</p>

<h2 id="code">Code</h2>

<h3 id="maven">Maven</h3>

<p>This Maven configuration uses two key plugins to build and run a JAR file with
an executable Java class (MainTest) in the tests directory: the Maven Shade
Plugin and the Maven Resources Plugin.</p>
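<p>Assuming the profile id <code>test-jar</code> from the POM below, building and running the result could look like this. The JAR name follows maven-shade’s <code>&lt;artifact&gt;-&lt;version&gt;-shaded.jar</code> convention for attached shaded artifacts, and the commands are guarded so the sketch is a no-op where Maven or the project isn’t available:</p>

```shell
# Build the shaded JAR with the test-jar profile and run it.
jar=target/mvn-test-jar-1.0-SNAPSHOT-shaded.jar
if command -v mvn >/dev/null 2>&1 && [ -f pom.xml ]; then
  mvn -q -Ptest-jar -DskipTests package
  java -jar "$jar"
else
  echo "would run: mvn -Ptest-jar package && java -jar $jar"
fi
```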

<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
<span class="nt">&lt;project</span> <span class="na">xmlns=</span><span class="s">"http://maven.apache.org/POM/4.0.0"</span>
         <span class="na">xmlns:xsi=</span><span class="s">"http://www.w3.org/2001/XMLSchema-instance"</span>
         <span class="na">xsi:schemaLocation=</span><span class="s">"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"</span><span class="nt">&gt;</span>
  <span class="nt">&lt;modelVersion&gt;</span>4.0.0<span class="nt">&lt;/modelVersion&gt;</span>

  <span class="nt">&lt;groupId&gt;</span>eu.cybershu<span class="nt">&lt;/groupId&gt;</span>
  <span class="nt">&lt;artifactId&gt;</span>mvn-test-jar<span class="nt">&lt;/artifactId&gt;</span>
  <span class="nt">&lt;version&gt;</span>1.0-SNAPSHOT<span class="nt">&lt;/version&gt;</span>

  <span class="nt">&lt;properties&gt;</span>
    <span class="nt">&lt;maven.compiler.source&gt;</span>17<span class="nt">&lt;/maven.compiler.source&gt;</span>
    <span class="nt">&lt;maven.compiler.target&gt;</span>17<span class="nt">&lt;/maven.compiler.target&gt;</span>
    <span class="nt">&lt;project.build.sourceEncoding&gt;</span>UTF-8<span class="nt">&lt;/project.build.sourceEncoding&gt;</span>
  <span class="nt">&lt;/properties&gt;</span>

  <span class="nt">&lt;profiles&gt;</span>
    <span class="nt">&lt;profile&gt;</span>
      <span class="nt">&lt;id&gt;</span>test-jar<span class="nt">&lt;/id&gt;</span>
      <span class="nt">&lt;build&gt;</span>
        <span class="nt">&lt;plugins&gt;</span>
          <span class="nt">&lt;plugin&gt;</span>
            <span class="nt">&lt;groupId&gt;</span>org.apache.maven.plugins<span class="nt">&lt;/groupId&gt;</span>
            <span class="nt">&lt;artifactId&gt;</span>maven-shade-plugin<span class="nt">&lt;/artifactId&gt;</span>
            <span class="nt">&lt;version&gt;</span>3.6.0<span class="nt">&lt;/version&gt;</span>
            <span class="nt">&lt;executions&gt;</span>
              <span class="nt">&lt;execution&gt;</span>
                <span class="nt">&lt;goals&gt;</span>
                  <span class="nt">&lt;goal&gt;</span>shade<span class="nt">&lt;/goal&gt;</span>
                <span class="nt">&lt;/goals&gt;</span>
                <span class="nt">&lt;configuration&gt;</span>
                  <span class="nt">&lt;shadedArtifactAttached&gt;</span>true<span class="nt">&lt;/shadedArtifactAttached&gt;</span>
                  <span class="nt">&lt;transformers&gt;</span>
                    <span class="nt">&lt;transformer</span> <span class="na">implementation=</span>
                                   <span class="s">"org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"</span><span class="nt">&gt;</span>
                      <span class="nt">&lt;mainClass&gt;</span>eu.cybershu.MainTest<span class="nt">&lt;/mainClass&gt;</span>
                    <span class="nt">&lt;/transformer&gt;</span>
                  <span class="nt">&lt;/transformers&gt;</span>
                <span class="nt">&lt;/configuration&gt;</span>
              <span class="nt">&lt;/execution&gt;</span>
            <span class="nt">&lt;/executions&gt;</span>
          <span class="nt">&lt;/plugin&gt;</span>

          <span class="nt">&lt;plugin&gt;</span>
            <span class="nt">&lt;groupId&gt;</span>org.apache.maven.plugins<span class="nt">&lt;/groupId&gt;</span>
            <span class="nt">&lt;artifactId&gt;</span>maven-resources-plugin<span class="nt">&lt;/artifactId&gt;</span>
            <span class="nt">&lt;version&gt;</span>3.3.1<span class="nt">&lt;/version&gt;</span>
            <span class="nt">&lt;executions&gt;</span>
              <span class="nt">&lt;execution&gt;</span>
                <span class="nt">&lt;id&gt;</span>copy-test-classes<span class="nt">&lt;/id&gt;</span>
                <span class="nt">&lt;phase&gt;</span>process-test-classes<span class="nt">&lt;/phase&gt;</span>
                <span class="nt">&lt;goals&gt;</span>
                  <span class="nt">&lt;goal&gt;</span>copy-resources<span class="nt">&lt;/goal&gt;</span>
                <span class="nt">&lt;/goals&gt;</span>
                <span class="nt">&lt;configuration&gt;</span>
                  <span class="nt">&lt;outputDirectory&gt;</span>${project.build.outputDirectory}
                  <span class="nt">&lt;/outputDirectory&gt;</span>
                  <span class="nt">&lt;resources&gt;</span>
                    <span class="nt">&lt;resource&gt;</span>
                      <span class="nt">&lt;directory&gt;</span>${project.build.testOutputDirectory}
                      <span class="nt">&lt;/directory&gt;</span>
                      <span class="nt">&lt;includes&gt;</span>
                        <span class="nt">&lt;include&gt;</span>**/*.class<span class="nt">&lt;/include&gt;</span>
                      <span class="nt">&lt;/includes&gt;</span>
                    <span class="nt">&lt;/resource&gt;</span>
                  <span class="nt">&lt;/resources&gt;</span>
                <span class="nt">&lt;/configuration&gt;</span>
              <span class="nt">&lt;/execution&gt;</span>
            <span class="nt">&lt;/executions&gt;</span>
          <span class="nt">&lt;/plugin&gt;</span>
        <span class="nt">&lt;/plugins&gt;</span>
      <span class="nt">&lt;/build&gt;</span>
    <span class="nt">&lt;/profile&gt;</span>
  <span class="nt">&lt;/profiles&gt;</span>
<span class="nt">&lt;/project&gt;</span>
</code></pre></div></div>

<h4 id="maven-shade-plugin">Maven Shade Plugin</h4>

<p>The Maven Shade Plugin packages the project and its dependencies into a single
executable JAR. As part of the shade phase it also rewrites the JAR manifest to
declare the main class.</p>

<p><strong>Configuration:</strong></p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">shadedArtifactAttached</code>: When set to <code class="language-plaintext highlighter-rouge">true</code>, the shaded JAR is
attached as an additional artifact (with a <code class="language-plaintext highlighter-rouge">-shaded</code> suffix) instead of replacing the main one.</li>
  <li><code class="language-plaintext highlighter-rouge">transformers</code>: This section modifies the JAR manifest. The
<code class="language-plaintext highlighter-rouge">ManifestResourceTransformer</code> sets the main class
(<code class="language-plaintext highlighter-rouge">eu.cybershu.MainTest</code>).</li>
</ul>
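<p>As a quick sketch of the result (artifact coordinates taken from the POM above): with <code class="language-plaintext highlighter-rouge">shadedArtifactAttached</code> enabled, Maven keeps the regular JAR and attaches the shaded one next to it:</p>

```shell
# Artifact coordinates from the POM above
artifact_id=mvn-test-jar
version=1.0-SNAPSHOT

# The regular JAR produced by the build...
plain_jar="target/${artifact_id}-${version}.jar"
# ...and the shaded JAR attached alongside it
shaded_jar="target/${artifact_id}-${version}-shaded.jar"

echo "$shaded_jar"   # -> target/mvn-test-jar-1.0-SNAPSHOT-shaded.jar
```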

<h4 id="maven-resources-plugin">Maven Resources Plugin</h4>

<p>The Maven Resources Plugin is employed here to ensure that the compiled test
classes are included in the final build output. This is necessary because the
main class (MainTest) resides in the <code class="language-plaintext highlighter-rouge">tests</code> directory.</p>

<p><strong>Key Elements:</strong></p>

<ul>
  <li>Execution ID (<code class="language-plaintext highlighter-rouge">copy-test-classes</code>): Identifies this specific execution of the
plugin.</li>
  <li>Phase (<code class="language-plaintext highlighter-rouge">process-test-classes</code>): Specifies when this plugin should run during
the build lifecycle.</li>
</ul>

<p><strong>Configuration:</strong></p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">outputDirectory</code>: Specifies where the resources should be copied to, here
the main output directory of the build.</li>
  <li><code class="language-plaintext highlighter-rouge">resources</code>: Defines which resources to copy. Here, it includes all <code class="language-plaintext highlighter-rouge">.class</code> files
from the test output directory.</li>
</ul>

<p>By configuring this plugin, the build process ensures that the compiled test
classes (including <code class="language-plaintext highlighter-rouge">MainTest.class</code>) are copied to the main output directory.
This step is crucial for including the MainTest class in the final shaded JAR
file.</p>
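<p>To make the copy step concrete, here is a plain-shell sketch of what this execution does, using hypothetical stand-in directories for Maven’s outputs:</p>

```shell
# Hypothetical stand-ins for Maven's test and main output directories
mkdir -p demo/test-classes/eu/cybershu demo/classes
touch demo/test-classes/eu/cybershu/MainTest.class

# The copy-resources execution copies the compiled test classes into the
# main output directory (the plugin itself filters on **/*.class; this
# demo tree contains only .class files)
cp -R demo/test-classes/. demo/classes/

ls demo/classes/eu/cybershu   # -> MainTest.class
```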

<h3 id="java">Java</h3>

<p><strong>MainTest.java - Executable Class</strong></p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">java.io.BufferedReader</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">java.io.InputStreamReader</span><span class="o">;</span>

<span class="kd">public</span> <span class="kd">class</span> <span class="nc">MainTest</span> <span class="o">{</span>
  <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="nc">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="o">{</span>
    <span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Let's print a file......"</span><span class="o">);</span>

    <span class="nc">BufferedReader</span> <span class="n">reader</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">BufferedReader</span><span class="o">(</span><span class="k">new</span> <span class="nc">InputStreamReader</span><span class="o">(</span>
      <span class="nc">MainTest</span><span class="o">.</span><span class="na">class</span><span class="o">.</span><span class="na">getClassLoader</span><span class="o">().</span><span class="na">getResourceAsStream</span><span class="o">(</span><span class="s">"any_file.txt"</span><span class="o">)));</span>
    <span class="n">reader</span><span class="o">.</span><span class="na">lines</span><span class="o">().</span><span class="na">forEach</span><span class="o">(</span><span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">::</span><span class="n">println</span><span class="o">);</span>
  <span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div>

<h2 id="build-and-running">Build and running</h2>

<p><strong>How to build:</strong></p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn package <span class="nt">-P</span> test-jar
</code></pre></div></div>

<p><strong>How to run:</strong></p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>java <span class="nt">-jar</span> ./target/mvn-test-jar-1.0-SNAPSHOT-shaded.jar
</code></pre></div></div>

<h2 id="contact">Contact</h2>

<p>I hope you find this useful! 😊 Please leave comments or contact me directly if
you have any questions:</p>

<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>Check out more of my posts on my second
blog: <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="java" /><category term="maven" /><category term="jar" /><category term="jvm" /><category term="build" /><category term="testing" /><summary type="html"><![CDATA[Hello there ✋, Hello ✋, I was tasked with integrating a Java class for data generation into our CI/CD pipeline. The goal was to create a shaded JAR that could run directly from the pipeline. The challenge was that the class was in the tests directory, which Maven doesn’t support for building JARs. I resolved this by using the maven-shade-plugin and maven-resources-plugin to build a shaded JAR, including necessary resources like SQL schema files. I hope this example saves you time during development! 🚀 Code Maven This Maven configuration uses two key plugins to build and run a JAR file with an executable Java class (MainTest) in the tests directory: the Maven Shade Plugin and the Maven Resources Plugin. 
&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"&gt; &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt; &lt;groupId&gt;eu.cybershu&lt;/groupId&gt; &lt;artifactId&gt;mvn-test-jar&lt;/artifactId&gt; &lt;version&gt;1.0-SNAPSHOT&lt;/version&gt; &lt;properties&gt; &lt;maven.compiler.source&gt;17&lt;/maven.compiler.source&gt; &lt;maven.compiler.target&gt;17&lt;/maven.compiler.target&gt; &lt;project.build.sourceEncoding&gt;UTF-8&lt;/project.build.sourceEncoding&gt; &lt;/properties&gt; &lt;profiles&gt; &lt;profile&gt; &lt;id&gt;test-jar&lt;/id&gt; &lt;build&gt; &lt;plugins&gt; &lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-shade-plugin&lt;/artifactId&gt; &lt;version&gt;3.6.0&lt;/version&gt; &lt;executions&gt; &lt;execution&gt; &lt;goals&gt; &lt;goal&gt;shade&lt;/goal&gt; &lt;/goals&gt; &lt;configuration&gt; &lt;shadedArtifactAttached&gt;true&lt;/shadedArtifactAttached&gt; &lt;transformers&gt; &lt;transformer implementation= "org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"&gt; &lt;mainClass&gt;eu.cybershu.MainTest&lt;/mainClass&gt; &lt;/transformer&gt; &lt;/transformers&gt; &lt;/configuration&gt; &lt;/execution&gt; &lt;/executions&gt; &lt;/plugin&gt; &lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-resources-plugin&lt;/artifactId&gt; &lt;version&gt;3.3.1&lt;/version&gt; &lt;executions&gt; &lt;execution&gt; &lt;id&gt;copy-test-classes&lt;/id&gt; &lt;phase&gt;process-test-classes&lt;/phase&gt; &lt;goals&gt; &lt;goal&gt;copy-resources&lt;/goal&gt; &lt;/goals&gt; &lt;configuration&gt; &lt;outputDirectory&gt;${project.build.outputDirectory} &lt;/outputDirectory&gt; &lt;resources&gt; &lt;resource&gt; &lt;directory&gt;${project.build.testOutputDirectory} &lt;/directory&gt; 
&lt;includes&gt; &lt;include&gt;**/*.class&lt;/include&gt; &lt;/includes&gt; &lt;/resource&gt; &lt;/resources&gt; &lt;/configuration&gt; &lt;/execution&gt; &lt;/executions&gt; &lt;/plugin&gt; &lt;/plugins&gt; &lt;/build&gt; &lt;/profile&gt; &lt;/profiles&gt; &lt;/project&gt; Maven Shade Plugin The Maven Shade Plugin is used to package the project into an executable JAR file. This plugin performs several tasks, such as combining dependencies into a single JAR and modifying the manifest file to specify the main class. Shading phase packs the project into a JAR file that includes dependencies. Configuration: shadedArtifactAttached: When set to true, this ensures the shaded JAR is attached as an additional artifact. transformers: This section modifies the JAR manifest. The ManifestResourceTransformer is used to specify the main class ( eu.cybershu.MainTest). Maven Resources Plugin The Maven Resources Plugin is employed here to ensure that the compiled test classes are included in the final build output. This is necessary because the main class (MainTest) resides in the tests directory. Key Elements: Execution ID (copy-test-classes): Identifies this specific execution of the plugin. Phase (process-test-classes): Specifies when this plugin should run during the build lifecycle. Configuration: outputDirectory: Specifies where the resources should be copied to, typically the main output directory of the build. resources: Defines which resources to copy. Here, it includes all .class files from the test output directory. By configuring this plugin, the build process ensures that the compiled test classes (including MainTest.class) are copied to the main output directory. This step is crucial for including the MainTest class in the final shaded JAR file. 
Java MainTest.java - Executable Class public class MainTest { public static void main(String[] args) { System.out.println("Let's print a file......"); BufferedReader reader = new BufferedReader(new InputStreamReader( MainTest.class.getClassLoader().getResourceAsStream("any_file.txt"))); reader.lines().forEach(System.out::println); } } Build and running How to build: mvn package -P test-jar How to run: java -jar ./target/mvn-test-jar-1.0-SNAPSHOT-shaded.jar Contact I hope you find this useful! 😊 Please leave comments or contact me directly if you have any questions: Twitter: https://twitter.com/MichalMzr LinkedIn: https://www.linkedin.com/in/michmzr/ Check out more of my posts on my second blog: Geekowojażer.pl]]></summary></entry><entry xml:lang="en"><title type="html">How to work with Python virtual env in Dockerfile</title><link href="https://www.cybershu.eu/articles/python-dockerfile.html" rel="alternate" type="text/html" title="How to work with Python virtual env in Dockerfile" /><published>2024-05-17T00:00:00+02:00</published><updated>2024-05-17T00:00:00+02:00</updated><id>https://www.cybershu.eu/articles/python-dockerfile</id><content type="html" xml:base="https://www.cybershu.eu/articles/python-dockerfile.html"><![CDATA[<p>If you work with Python, you probably need to activate a virtual environment to manage dependencies. Using a virtualenv requires activation each time you want to use Python or pip. This blog post will show you how to effectively and elegantly work with Python when building a Dockerfile.</p>

<h2 id="method-1-bad">Method 1: Bad</h2>

<p>Let’s say you want to translate a shell script into a Dockerfile, copying the commands over one by one. The solution below won’t work because of the line <code class="language-plaintext highlighter-rouge">RUN . /venv/bin/activate</code>. The virtual environment is activated, but the activation won’t carry over to subsequent commands, because each RUN instruction runs in a separate shell process.</p>

<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> alpine:latest</span>

<span class="k">RUN </span>apk add <span class="nt">--no-cache</span> python3 py3-pip

<span class="c"># Create and use a virtual environment</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> venv /venv

<span class="c"># This is wrong!</span>
<span class="k">RUN </span><span class="nb">.</span> /venv/bin/activate

<span class="k">COPY</span><span class="s"> . .</span>

<span class="k">RUN </span>pip <span class="nb">install</span> <span class="nt">-r</span> ./requirements.txt
<span class="k">RUN </span>python main.py

</code></pre></div></div>

<p>I get build error:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#0 building with "default" instance using docker-container driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 276B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/alpine:latest
#2 DONE 0.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [1/7] FROM docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
#4 resolve docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b done
#4 DONE 0.0s

#5 [internal] load build context
#5 transferring context: 91B done
#5 DONE 0.0s

#6 [2/7] RUN apk add --no-cache python3 py3-pip
#6 CACHED

#7 [3/7] RUN python3 -m venv /venv
#7 CACHED

#8 [4/7] RUN . /venv/bin/activate
#8 CACHED

#9 [5/7] COPY . .
#9 CACHED

#10 [6/7] RUN pip install -r ./requirements.txt
#10 0.587 error: externally-managed-environment
#10 0.587
#10 0.587 × This environment is externally managed
#10 0.587 ╰─&gt;
#10 0.587     The system-wide python installation should be maintained using the system
#10 0.587     package manager (apk) only.
#10 0.587
#10 0.587     If the package in question is not packaged already (and hence installable via
#10 0.587     "apk add py3-somepackage"), please consider installing it inside a virtual
#10 0.587     environment, e.g.:
#10 0.587
#10 0.587     python3 -m venv /path/to/venv
#10 0.587     . /path/to/venv/bin/activate
#10 0.587     pip install mypackage
#10 0.587
#10 0.587     To exit the virtual environment, run:
#10 0.587
#10 0.587     deactivate
#10 0.587
#10 0.587     The virtual environment is not deleted, and can be re-entered by re-sourcing
#10 0.587     the activate file.
#10 0.587
#10 0.587     To automatically manage virtual environments, consider using pipx (from the
#10 0.587     pipx package).
#10 0.587
#10 0.587 note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
#10 0.587 hint: See PEP 668 for the detailed specification.
#10 ERROR: process "/bin/sh -c pip install -r ./requirements.txt" did not complete successfully: exit code: 1
------
 &gt; [6/7] RUN pip install -r ./requirements.txt:
0.587     deactivate
0.587
0.587     The virtual environment is not deleted, and can be re-entered by re-sourcing
0.587     the activate file.
0.587
0.587     To automatically manage virtual environments, consider using pipx (from the
0.587     pipx package).
0.587
0.587 note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
0.587 hint: See PEP 668 for the detailed specification.
------
Dockerfile:13
--------------------
  11 |     COPY . .
  12 |
  13 | &gt;&gt;&gt; RUN pip install -r ./requirements.txt
  14 |     RUN python main.py
  15 |
--------------------
</code></pre></div></div>

<p>It is the system Python’s pip that installs the dependencies from requirements.txt, not the one in the freshly created <code class="language-plaintext highlighter-rouge">/venv/</code>!</p>
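<p>The root cause is easy to reproduce in plain shell: each <code class="language-plaintext highlighter-rouge">RUN</code> instruction executes in a fresh shell process, so environment changes made by one <code class="language-plaintext highlighter-rouge">RUN</code> do not survive into the next:</p>

```shell
# First "RUN": sets a variable, then its shell exits
sh -c 'export VENV_ACTIVATED=yes'

# Next "RUN": a brand-new shell, so the variable is gone
result=$(sh -c 'echo "${VENV_ACTIVATED:-unset}"')
echo "$result"   # -> unset
```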

<h2 id="method-2-ugly">Method 2: Ugly</h2>

<p>You can call <code class="language-plaintext highlighter-rouge">/venv/</code>’s Python and pip directly. It’s a better solution, but it still has caveats. First, you have to tediously prefix every command with the path. Secondly, when Python spawns a subprocess that calls <code class="language-plaintext highlighter-rouge">python</code> or <code class="language-plaintext highlighter-rouge">pip</code>, that subprocess resolves them via <code class="language-plaintext highlighter-rouge">PATH</code> and won’t see the <code class="language-plaintext highlighter-rouge">/venv/</code> dependencies. So, can we do better?</p>

<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> alpine:latest</span>

<span class="k">RUN </span>apk add <span class="nt">--no-cache</span> python3 py3-pip

<span class="c"># Create and use a virtual environment</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> venv /venv

<span class="k">COPY</span><span class="s"> . .</span>

<span class="k">RUN </span>/venv/bin/pip <span class="nb">install</span> <span class="nt">-r</span> ./requirements.txt
<span class="k">RUN </span>/venv/bin/python main.py
</code></pre></div></div>

<h2 id="method-3-good">Method 3: Good</h2>
<p>Yes, we can. The good solution is to add <code class="language-plaintext highlighter-rouge">/venv/bin</code> to the system <code class="language-plaintext highlighter-rouge">PATH</code> environment variable.</p>

<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> alpine:latest</span>

<span class="k">RUN </span>apk add <span class="nt">--no-cache</span> python3 py3-pip

<span class="c"># Create and use a virtual environment</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> venv /venv
<span class="k">ENV</span><span class="s"> PATH="/venv/bin:$PATH"</span>

<span class="k">COPY</span><span class="s"> . .</span>
<span class="k">RUN </span>pip <span class="nb">install</span> <span class="nt">-r</span> ./requirements.txt
<span class="k">RUN </span>python main.py
</code></pre></div></div>

<p>As a result, we achieve an elegant solution!</p>

<p>The virtualenv documentation even states that activating the environment is “purely a convenience.”</p>

<p>If you read the code for <code class="language-plaintext highlighter-rouge">activate</code>, it does several things:</p>

<ol>
  <li>Figures out what shell you’re running.</li>
  <li>Adds a deactivate function to your shell, which can interfere with pydoc.</li>
  <li>Changes the shell prompt to include the virtualenv name.</li>
  <li>Unsets the <code class="language-plaintext highlighter-rouge">PYTHONHOME</code> environment variable, if it was set.</li>
  <li>Sets two environment variables: <code class="language-plaintext highlighter-rouge">VIRTUAL_ENV</code> and <code class="language-plaintext highlighter-rouge">PATH</code>.</li>
</ol>

<p>From Docker’s perspective, points 1-4 are irrelevant. We only need to take care of <code class="language-plaintext highlighter-rouge">VIRTUAL_ENV</code> and <code class="language-plaintext highlighter-rouge">PATH</code>, which we can define manually.</p>
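<p>In other words, the whole activation dance collapses to prepending one directory to <code class="language-plaintext highlighter-rouge">PATH</code>. A minimal shell sketch (the venv path is the one from the Dockerfile above):</p>

```shell
# What ENV PATH="/venv/bin:$PATH" boils down to
VIRTUAL_ENV=/venv
PATH="/venv/bin:$PATH"
export VIRTUAL_ENV PATH

# The venv's bin directory now wins every lookup for python/pip, and
# child processes inherit the modified PATH automatically
first_entry=$(echo "$PATH" | cut -d: -f1)
echo "$first_entry"   # -> /venv/bin
```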

<h2 id="wrap-up">Wrap-up</h2>
<p>I hope you found this useful :). Please leave a comment to let me know. If you have any questions, feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="python" /><category term="docker" /><category term="dockerfile" /><category term="venv" /><category term="automation" /><category term="devops" /><summary type="html"><![CDATA[If you work with Python, you probably need to activate a virtual environment to manage dependencies. Using a virtualenv requires activation each time you want to use Python or pip. This blog post will show you how to effectively and elegantly work with Python when building a Dockerfile. Method 1: Bad Let’s say you want to copy and change a shell script into a Dockerfile. The solution below won’t work because of the line RUN . /venv/bin/activate. The virtual environment is activated, but it won’t be available in subsequent commands because RUN commands run in separate processes. FROM alpine:latest RUN apk add --no-cache python3 py3-pip # Create and use a virtual environment RUN python3 -m venv /venv # This is wrong! RUN . /venv/bin/activate COPY . . RUN pip install -r ./requirements.txt RUN python main.py I get build error: #0 building with "default" instance using docker-container driver #1 [internal] load build definition from Dockerfile #1 transferring dockerfile: 276B done #1 DONE 0.0s #2 [internal] load metadata for docker.io/library/alpine:latest #2 DONE 0.2s #3 [internal] load .dockerignore #3 transferring context: 2B done #3 DONE 0.0s #4 [1/7] FROM docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b #4 resolve docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b done #4 DONE 0.0s #5 [internal] load build context #5 transferring context: 91B done #5 DONE 0.0s #6 [2/7] RUN apk add --no-cache python3 py3-pip #6 CACHED #7 [3/7] RUN python3 -m venv /venv #7 CACHED #8 [4/7] RUN . 
/venv/bin/activate #8 CACHED #9 [5/7] COPY . . #9 CACHED #10 [6/7] RUN pip install -r ./requirements.txt #10 0.587 error: externally-managed-environment #10 0.587 #10 0.587 × This environment is externally managed #10 0.587 ╰─&gt; #10 0.587 The system-wide python installation should be maintained using the system #10 0.587 package manager (apk) only. #10 0.587 #10 0.587 If the package in question is not packaged already (and hence installable via #10 0.587 "apk add py3-somepackage"), please consider installing it inside a virtual #10 0.587 environment, e.g.: #10 0.587 #10 0.587 python3 -m venv /path/to/venv #10 0.587 . /path/to/venv/bin/activate #10 0.587 pip install mypackage #10 0.587 #10 0.587 To exit the virtual environment, run: #10 0.587 #10 0.587 deactivate #10 0.587 #10 0.587 The virtual environment is not deleted, and can be re-entered by re-sourcing #10 0.587 the activate file. #10 0.587 #10 0.587 To automatically manage virtual environments, consider using pipx (from the #10 0.587 pipx package). #10 0.587 #10 0.587 note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. #10 0.587 hint: See PEP 668 for the detailed specification. #10 ERROR: process "/bin/sh -c pip install -r ./requirements.txt" did not complete successfully: exit code: 1 ------ &gt; [6/7] RUN pip install -r ./requirements.txt: 0.587 deactivate 0.587 0.587 The virtual environment is not deleted, and can be re-entered by re-sourcing 0.587 the activate file. 0.587 0.587 To automatically manage virtual environments, consider using pipx (from the 0.587 pipx package). 0.587 0.587 note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. 
0.587 hint: See PEP 668 for the detailed specification. ------ Dockerfile:13 -------------------- 11 | COPY . . 12 | 13 | &gt;&gt;&gt; RUN pip install -r ./requirements.txt 14 | RUN python main.py 15 | -------------------- The system Python environment is triggered when importing dependencies from requirements.txt, not the created /venv/! Method 2: Ugly You can directly call /venv/’s Python and pip. It’s a better solution, but it still has caveats. First, you need to do tedious work adding the prefix path. Secondly, when Python triggers a subprocess, it won’t have access to /venv/ dependencies. So, can we do better? FROM alpine:latest RUN apk add --no-cache python3 py3-pip # Create and use a virtual environment RUN python3 -m venv /venv COPY . . RUN /venv/bin/pip install -r ./requirements.txt RUN /venv/bin/python main.py Method 3: Good Yes, we can. The good solution is to add /venv/bin to the system PATH environment variable. FROM alpine:latest RUN apk add --no-cache python3 py3-pip # Create and use a virtual environment RUN python3 -m venv /venv ENV PATH="/venv/bin:$PATH" COPY . . RUN pip install -r ./requirements.txt RUN python main.py As a result, we achieve an elegant solution! The virtualenv documentation even states that activating the environment is “purely a convenience.” If you read the code for activate, it does several things: Figures out what shell you’re running. Adds a deactivate function to your shell, which can interfere with pydoc. Changes the shell prompt to include the virtualenv name. Unsets the PYTHONHOME environment variable, if it was set. Sets two environment variables: VIRTUAL_ENV and PATH. From Docker’s perspective, points 1-4 are irrelevant. We only need to take care of VIRTUAL_ENV and PATH, which we can define manually. Wrap-up I hope what I wrote is useful :). Please leave any comments to let me know. 
If you have any questions, please feel free to contact me directly on: Twitter: https://twitter.com/MichalMzr LinkedIn: https://www.linkedin.com/in/michmzr/ You can also find my posts on my second blog Geekowojażer.pl]]></summary></entry><entry xml:lang="en"><title type="html">How to use Kaniko and JFrog with GitHub Actions</title><link href="https://www.cybershu.eu/articles/kaniko-jfrog.html" rel="alternate" type="text/html" title="How to use Kaniko and JFrog with GitHub Actions" /><published>2024-01-16T00:00:00+01:00</published><updated>2024-01-16T00:00:00+01:00</updated><id>https://www.cybershu.eu/articles/kaniko-jfrog-github-actions</id><content type="html" xml:base="https://www.cybershu.eu/articles/kaniko-jfrog.html"><![CDATA[<h2 id="introduction">Introduction</h2>

<p>In this article, we will explore the use of Kaniko and JFrog with GitHub Actions to build and push a Docker image to JFrog Artifactory. This solution addressed issues I encountered while working on a project in my company. I hope it proves useful for you as well.</p>
<h2 id="prerequisites">Prerequisites</h2>

<p>You must have:</p>
<ul>
  <li>a GitHub project with a Dockerfile</li>
  <li>a JFrog Artifactory account with a Docker repository created</li>
</ul>

<h2 id="assumptions">Assumptions</h2>
<p>Your Artifactory repository is called <code class="language-plaintext highlighter-rouge">docker-images</code>, and you have created a user with the username <code class="language-plaintext highlighter-rouge">docker</code> and password (API token) <code class="language-plaintext highlighter-rouge">docker-password</code>.
The full path is <code class="language-plaintext highlighter-rouge">https://artifactory.example.com/artifactory/docker-images</code>, and you want to build and push an image named <code class="language-plaintext highlighter-rouge">my-image:latest</code>.</p>
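<p>Put together, these assumptions yield the full image reference that will be pushed (a sketch; all values come from the assumptions above):</p>

```shell
# Values from the assumptions above
registry=artifactory.example.com
repository=docker-images
image=my-image
tag=latest

image_ref="${registry}/${repository}/${image}:${tag}"
echo "$image_ref"   # -> artifactory.example.com/docker-images/my-image:latest
```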

<h2 id="configuration">Configuration</h2>

<p>I use the <a href="https://github.com/aevea/action-kaniko">kaniko</a> action to build the image and push it to JFrog Artifactory.</p>

<p><strong>Example Github action job file:</strong></p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">name</span><span class="pi">:</span> <span class="s">Build and push Docker image to JFrog Artifactory</span>
<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">container-test-job</span><span class="pi">:</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">ubuntu-latest</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Kaniko build</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">aevea/action-kaniko@master</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">image</span><span class="pi">:</span> <span class="s">docker-images/my-image</span>
          <span class="na">registry</span><span class="pi">:</span> <span class="s">artifactory.example.com</span>
          <span class="na">username</span><span class="pi">:</span> <span class="s2">"</span><span class="s">docker"</span>
          <span class="na">password</span><span class="pi">:</span> <span class="s2">"</span><span class="s">docker-password"</span>
          <span class="na">tag</span><span class="pi">:</span> <span class="s">latest</span>
          <span class="na">path</span><span class="pi">:</span> <span class="s">Dockerfile</span>
      <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">docker images ls</span>
</code></pre></div></div>
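<p>The credentials above are hardcoded only to match the assumptions. In a real workflow, keep them in GitHub Actions secrets instead. A minimal sketch of the same step, assuming you created secrets named <code class="language-plaintext highlighter-rouge">JFROG_USERNAME</code> and <code class="language-plaintext highlighter-rouge">JFROG_TOKEN</code> (the names are mine, pick your own):</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>      - name: Kaniko build
        uses: aevea/action-kaniko@master
        with:
          image: docker-images/my-image
          registry: artifactory.example.com
          username: ${{ secrets.JFROG_USERNAME }}
          password: ${{ secrets.JFROG_TOKEN }}
          tag: latest
          path: Dockerfile
</code></pre></div></div>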

<p>Alternatively, you can use the standard Docker toolchain with <code class="language-plaintext highlighter-rouge">docker/build-push-action</code>:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">name</span><span class="pi">:</span> <span class="s">Build and push Docker image to JFrog Artifactory</span>
<span class="na">on</span><span class="pi">:</span> <span class="pi">[</span><span class="nv">push</span><span class="pi">]</span>
<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">container-test-job</span><span class="pi">:</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">ubuntu-latest</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">docker build</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">docker/build-push-action@v5</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">push</span><span class="pi">:</span> <span class="no">false</span>
          <span class="na">load</span><span class="pi">:</span> <span class="no">true</span>
          <span class="na">tags</span><span class="pi">:</span> <span class="s">my-image:latest</span>
          <span class="na">file</span><span class="pi">:</span> <span class="s">Dockerfile</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Login to JFrog Artifactory</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">docker/login-action@v3</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">registry</span><span class="pi">:</span> <span class="s2">"</span><span class="s">artifactory.example.com"</span>
          <span class="na">username</span><span class="pi">:</span> <span class="s2">"</span><span class="s">docker"</span>
          <span class="na">password</span><span class="pi">:</span> <span class="s2">"</span><span class="s">docker-password"</span>
      <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">docker image ls</span>
      <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">docker tag my-image:latest artifactory.example.com/docker-images/my-image:latest</span>
      <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">docker push artifactory.example.com/docker-images/my-image:latest</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">load: true</code> flag matters here: <code class="language-plaintext highlighter-rouge">docker/build-push-action</code> builds with Buildx, and without it the image would not be loaded into the local Docker daemon, so the later <code class="language-plaintext highlighter-rouge">docker tag</code> step would fail.</p>
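<p>The tag-and-push <code class="language-plaintext highlighter-rouge">run</code> steps can usually be folded into the build step itself, because <code class="language-plaintext highlighter-rouge">docker/build-push-action</code> accepts fully qualified tags and can push directly. A sketch under the same assumptions as above (the login step still has to run before it):</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: artifactory.example.com/docker-images/my-image:latest
          file: Dockerfile
</code></pre></div></div>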

<h2 id="contact">Contact</h2>
<p>I hope what I wrote is useful :). Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="kaniko" /><category term="jfrog" /><category term="github" /><category term="actions" /><category term="artifactory" /><category term="devops" /><category term="docker" /><category term="ci" /><category term="cd" /><category term="github-actions" /><summary type="html"><![CDATA[Introduction In this article, we will explore the use of Kaniko and JFrog with GitHub Actions to build and push a Docker image to JFrog Artifactory. This solution addressed issues I encountered while working on a project in my company. I hope it proves useful for you as well. Prerequisites You must have: GitHub project with Dockerfile JFrog Artifactory account with created docker repository Assumptions Your artifactory repository is called docker-local and you have created a user with username docker and password(api token) docker-password. Full path is https://artifactory.example.com/artifactory/docker-images and you want to build and push image with name my-image:latest. Configuration I use kaniko action to build and push image to JFrog Artifactory. 
Example Github action job file: name: Build and push Docker image to JFrog Artifactory jobs: container-test-job: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Kaniko build uses: aevea/action-kaniko@master with: image: docker-images/my-image registry: artifactory.example.com username: "docker" password: "docker-password" tag: latest path: Dockerfile - run: docker images ls you can alternatively use docker build command: name: Build and push Docker image to JFrog Artifactory jobs: container-test-job: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: docker build uses: docker/build-push-action@v5 with: push: false tags: my-image:latest file: Dockerfile - name: Login to DevHub Docker Hub uses: docker/login-action@v3 with: registry: "artifactory.example.com" username: "docker" password: "docker-password" - run: docker image ls - run: docker tag my-image:latest artifactory.example.com/docker-images/my-image:latest - run: docker push artifactory.example.com/docker-images/my-image:latest Contact I hope what I wrote is useful :). Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on: Twitter: https://twitter.com/MichalMzr LinkedIn: https://www.linkedin.com/in/michmzr/ You can also find my posts on my second blog Geekowojażer.pl]]></summary></entry><entry xml:lang="en"><title type="html">How to setup N8n.io on your server</title><link href="https://www.cybershu.eu/articles/n8n-setup.html" rel="alternate" type="text/html" title="How to setup N8n.io on your server" /><published>2023-12-26T00:00:00+01:00</published><updated>2023-12-26T00:00:00+01:00</updated><id>https://www.cybershu.eu/articles/n8n-setup</id><content type="html" xml:base="https://www.cybershu.eu/articles/n8n-setup.html"><![CDATA[<p>Hello ✋,</p>

<p>A few months ago, I came across an interesting tool for process automation. Its functionality is very similar to Zapier, IFTTT, or make.com. It’s called <a href="https://n8n.io/">N8N</a>.</p>

<p>I base my automations on n8n, such as an “AI” assistant or managing Todoist, which I use for task and project management. Some processes are still kept on <a href="https://www.make.com/en">make.com</a>. What sets n8n apart from make.com is its free self-hosted version: if you have your own server, you can install and use it for free. Make can become very costly after exceeding a certain number of operations per month, and there is also a privacy aspect, because your data is kept on their servers.</p>

<p>In this article, I’m going to show how to use Docker Compose to set up the application for yourself. It requires basic skills in Linux administration and Docker. You can also use any other platform where Docker images run, or a virtual machine.</p>

<p><strong>Screenshot of one of my automations:</strong>
<img src="../assets/images/posts/n8n-setup/n8n_scenario_screnshot.png" alt="Screenshot of an n8n automation scenario" /></p>

<h2 id="1-install-docker">1. Install docker</h2>
<p>Follow the links below:
<strong>Ubuntu:</strong> https://docs.docker.com/engine/install/ubuntu/
<strong>Debian:</strong> https://docs.docker.com/engine/install/debian/
<strong>other platforms:</strong> https://docs.docker.com/engine/install/</p>

<h2 id="2-create-docker-composeyml-file">2. Create docker-compose.yml file</h2>
<p>Create a <code class="language-plaintext highlighter-rouge">docker-compose.yml</code> file in a directory of your choice.
In my case, the file is located at <code class="language-plaintext highlighter-rouge">/var/lib/n8n/docker-compose.yml</code>.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">version</span><span class="pi">:</span> <span class="s1">'</span><span class="s">3.1'</span>

<span class="na">services</span><span class="pi">:</span>
  <span class="na">n8n</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">docker.n8n.io/n8nio/n8n</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">unless-stopped</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">n8n</span>
    <span class="na">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">5678:5678"</span>
    <span class="na">environment</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">N8N_BASIC_AUTH_ACTIVE=true</span>
      <span class="pi">-</span> <span class="s">N8N_BASIC_AUTH_USER=foo-user</span>
      <span class="pi">-</span> <span class="s">N8N_BASIC_AUTH_PASSWORD=very-long-password</span>
      <span class="pi">-</span> <span class="s">N8N_HOST=[host IP or domain]</span>
      <span class="pi">-</span> <span class="s">N8N_PORT=5678</span>
      <span class="pi">-</span> <span class="s">WEBHOOK_URL=[your webhook url]</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">/var/run/docker.sock:/var/run/docker.sock</span>
      <span class="pi">-</span> <span class="s">./n8n:/home/node/.n8n</span>
</code></pre></div></div>
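<p>Two more environment variables may be worth setting when n8n sits behind an HTTPS reverse proxy, as in the next step. Both come from the n8n docs; the timezone value below is only an example, so adjust it to your setup. Note also that newer n8n releases replace the <code class="language-plaintext highlighter-rouge">N8N_BASIC_AUTH_*</code> variables with built-in user management, so check the current documentation for your version:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    environment:
      - N8N_PROTOCOL=https
      - GENERIC_TIMEZONE=Europe/Warsaw
</code></pre></div></div>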

<p><strong>where</strong>:
<code class="language-plaintext highlighter-rouge">foo-user</code> - your username <br />
<code class="language-plaintext highlighter-rouge">very-long-password</code> - your password<br />
<code class="language-plaintext highlighter-rouge">[host IP or domain]</code> - your server IP, e.g. <code class="language-plaintext highlighter-rouge">127.0.0.1</code>, or domain <br />
<code class="language-plaintext highlighter-rouge">[your webhook url]</code> - your webhook URL, e.g. https://n8n.domain.com <br /></p>

<h2 id="3-configure-reverse-proxy-nginx">3. Configure reverse proxy (nginx)</h2>
<p>Create the nginx config file for n8n at <code class="language-plaintext highlighter-rouge">/etc/nginx/sites-available/n8n.conf</code>:</p>

<div class="language-nginx highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">server</span> <span class="p">{</span>
    <span class="kn">server_name</span> <span class="s">[server</span> <span class="s">name]</span><span class="p">;</span>
    <span class="kn">location</span> <span class="n">/</span> <span class="p">{</span>
        <span class="kn">proxy_pass</span> <span class="s">http://[server</span> <span class="s">ip]:5678</span><span class="p">;</span> <span class="c1"># proxy_pass needs the http:// scheme</span>
        <span class="kn">proxy_http_version</span> <span class="mi">1</span><span class="s">.1</span><span class="p">;</span>

        <span class="c1"># WebSocket support</span>
        <span class="kn">proxy_set_header</span> <span class="s">Upgrade</span> <span class="nv">$http_upgrade</span><span class="p">;</span>
        <span class="kn">proxy_set_header</span> <span class="s">Connection</span> <span class="s">"upgrade"</span><span class="p">;</span>
        <span class="kn">proxy_read_timeout</span> <span class="mi">86400</span><span class="p">;</span> <span class="c1"># This can be set higher, necessary for WebSocket</span>

        <span class="kn">proxy_set_header</span> <span class="s">Host</span> <span class="nv">$host</span><span class="p">;</span>
        <span class="kn">proxy_set_header</span> <span class="s">X-Real-IP</span> <span class="nv">$remote_addr</span><span class="p">;</span>
        <span class="kn">proxy_set_header</span> <span class="s">X-Forwarded-For</span> <span class="nv">$proxy_add_x_forwarded_for</span><span class="p">;</span>
        <span class="kn">proxy_set_header</span> <span class="s">X-Forwarded-Proto</span> <span class="nv">$scheme</span><span class="p">;</span>

        <span class="c1"># Other settings</span>
        <span class="kn">chunked_transfer_encoding</span> <span class="no">off</span><span class="p">;</span>
        <span class="kn">proxy_buffering</span> <span class="no">off</span><span class="p">;</span>
        <span class="kn">proxy_cache</span> <span class="no">off</span><span class="p">;</span>
    <span class="p">}</span>


    <span class="kn">listen</span> <span class="mi">443</span> <span class="s">ssl</span><span class="p">;</span> <span class="c1"># managed by Certbot</span>
    <span class="kn">ssl_certificate</span> <span class="n">/etc/letsencrypt/live/n8n.cybershu.eu/fullchain.pem</span><span class="p">;</span> <span class="c1"># managed by Certbot</span>
    <span class="kn">ssl_certificate_key</span> <span class="n">/etc/letsencrypt/live/n8n.cybershu.eu/privkey.pem</span><span class="p">;</span> <span class="c1"># managed by Certbot</span>
    <span class="kn">include</span> <span class="n">/etc/letsencrypt/options-ssl-nginx.conf</span><span class="p">;</span> <span class="c1"># managed by Certbot</span>
    <span class="kn">ssl_dhparam</span> <span class="n">/etc/letsencrypt/ssl-dhparams.pem</span><span class="p">;</span> <span class="c1"># managed by Certbot</span>

<span class="p">}</span>

<span class="k">server</span> <span class="p">{</span>
    <span class="kn">if</span> <span class="s">(</span><span class="nv">$host</span> <span class="p">=</span> <span class="s">n8n.cybershu.eu)</span> <span class="p">{</span>
        <span class="kn">return</span> <span class="mi">301</span> <span class="s">https://</span><span class="nv">$host$request_uri</span><span class="p">;</span>
    <span class="p">}</span> <span class="c1"># managed by Certbot</span>

    <span class="kn">listen</span> <span class="mi">80</span><span class="p">;</span>
    <span class="kn">server_name</span> <span class="s">n8n.cybershu.eu</span><span class="p">;</span>
    <span class="kn">return</span> <span class="mi">404</span><span class="p">;</span> <span class="c1"># managed by Certbot</span>
<span class="p">}</span>
</code></pre></div></div>

<p>I use Certbot to generate and manage SSL certificates. You can find more information here: https://certbot.eff.org/instructions</p>
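<p>For completeness, generating the certificate for this config with Certbot’s nginx plugin boils down to a single command; this is only a sketch, so swap in your own domain and see the Certbot instructions for your OS:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo certbot --nginx -d n8n.cybershu.eu
</code></pre></div></div>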

<h2 id="run-docker-compose">Run docker-compose</h2>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-compose up <span class="nt">-d</span>
</code></pre></div></div>

<h2 id="enable-n8n-proxy-service">Enable n8n proxy service</h2>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd </span>/etc/nginx/sites-enabled
<span class="nb">sudo ln</span> <span class="nt">-s</span> ../sites-available/n8n.conf <span class="nb">.</span>
<span class="nb">ls</span> <span class="nt">-l</span>
</code></pre></div></div>

<p>and restart nginx</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl restart nginx
</code></pre></div></div>

<h2 id="login-to-your-n8n-instance">Login to your n8n instance</h2>
<p>Go to https://n8n.domain.com and log in with your credentials.</p>

<h2 id="useful-links">Useful links</h2>
<ul>
  <li><a href="https://docs.n8n.io/hosting/">https://docs.n8n.io/hosting/</a></li>
  <li><a href="https://www.cyberciti.biz/faq/nginx-restart-ubuntu-linux-command/">https://www.cyberciti.biz/faq/nginx-restart-ubuntu-linux-command/</a></li>
  <li><a href="https://community.n8n.io/t/websockets-and-sse-connection-lost/27137/6">https://community.n8n.io/t/websockets-and-sse-connection-lost/27137/6</a></li>
</ul>

<h2 id="wrap-up">Wrap-up</h2>
<p>I hope the article at least intrigued you and that you learned something new.</p>

<h2 id="contact">Contact</h2>
<p>Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="n8n" /><category term="automation" /><category term="make.com" /><category term="process" /><category term="docker" /><category term="vps" /><category term="linux" /><category term="nginx" /><category term="reverse-proxy" /><category term="docker-compose" /><summary type="html"><![CDATA[Hello ✋, A few months ago, I came across an interesting tool for process automation. In its functionalities, it is very similar to Zapier, IFTT, or make.com. It’s called N8N. I base my automations, such as an “AI” assistant or managing Todoist, which I use for task and project management, on n8n. Some processes are still kept on make.com. What sets it apart from make.com is its free self-hosted version. It means that if you have your server, you can install and use it for free. Make can become very costly after exceeding a certain number of operations per month, and there is also a privacy aspect because the data is kept by them. In the article, I’m going to show how to use docker compose to set up the application for yourself. It will require basic skills in Linux administration and Docker. You can also use any other platform where Docker images will work or a virtual machine. Screenshot one of my automations 1. Install docker Follow below links: Ubuntu: https://docs.docker.com/engine/install/ubuntu/ Debian: https://docs.docker.com/engine/install/debian/ other platforms: https://docs.docker.com/engine/install/ 2. Create docker-compose.yml file Create docker-compose.yml file in one of directories. 
I file in my case is located in /var/lib/n8n/docker-compose.yml version: '3.1' services: n8n: image: docker.n8n.io/n8nio/n8n restart: unless-stopped container_name: n8n ports: - "5678:5678" environment: - N8N_BASIC_AUTH_ACTIVE=true - N8N_BASIC_AUTH_USER=foo-user - N8N_BASIC_AUTH_PASSWORD=very-long-password - N8N_HOST=[host IP or domain] - N8N_PORT=5678 - WEBHOOK_URL=[your webhook url] volumes: - /var/run/docker.sock:/var/run/docker.sock - ./n8n:/home/node/.n8n where: foo-user - is your username very-long-password - is your password [host IP or domain] - is your server IP i.e 127.0.0.1 or domain [your webhook url] - is your webhook url i.e https://n8n.domain.com 3. Configure reverse proxy (nginx) N8n config file path: /etc/nginx/sites-available/n8n.conf server { server_name [server name]; location / { proxy_pass [server ip with port 5678]; proxy_http_version 1.1; # WebSocket support proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_read_timeout 86400; # This can be set higher, necessary for WebSocket proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Other settings chunked_transfer_encoding off; proxy_buffering off; proxy_cache off; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/n8n.cybershu.eu/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/n8n.cybershu.eu/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot }server { if ($host = n8n.cybershu.eu) { return 301 https://$host$request_uri; } # managed by Certbot I use certbot to generate and manage ssl certificates. 
You can find more information here: https://certbot.eff.org/instructions Run docker-compose docker-compose up -d Enable n8n proxy service cd sites-enabled sudo ln -s ../sites-available/n8n.conf . ls -l and restart nginx sudo systemctl restart nginx Login to your n8n instance Go to https://n8n.domain.com and login with your credentials. Useful links https://docs.n8n.io/hosting/ https://www.cyberciti.biz/faq/nginx-restart-ubuntu-linux-command/ https://community.n8n.io/t/websockets-and-sse-connection-lost/27137/6 Wrap-up I hope the article at least intrigued you, you learned something new. I Contact Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on: Twitter: https://twitter.com/MichalMzr LinkedIn: https://www.linkedin.com/in/michmzr/ You can also find my posts on my second blog Geekowojażer.pl]]></summary></entry><entry xml:lang="en"><title type="html">[Arch Kata] Lets’s design hotdog POS system</title><link href="https://www.cybershu.eu/articles/arch-kata-hotdog-pos-system.html" rel="alternate" type="text/html" title="[Arch Kata] Lets’s design hotdog POS system" /><published>2023-11-25T00:00:00+01:00</published><updated>2023-11-25T00:00:00+01:00</updated><id>https://www.cybershu.eu/articles/arch-kata-hotdog</id><content type="html" xml:base="https://www.cybershu.eu/articles/arch-kata-hotdog-pos-system.html"><![CDATA[<p>Hello ✋,</p>

<p>I’m starting a new challenge on the blog. I’m reading the great book <a href="https://www.amazon.com/System-Design-Interview-insiders-Second/dp/B08CMF2CQF">“System Design Interview”</a>, and it gave me the idea for a series of posts dedicated to system design, a necessary skill for senior engineers, especially those who want a say in the more important architectural decisions.</p>

<p>I found an interesting page, https://www.architecturalkatas.com/kata.html, where you can randomly draw a kata describing a system to design.</p>

<p>I drew:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Hot Diggety Dog!

Local hot dog stand merchant wants a point-of-sale system for his hot dog stand operators

Requirements: must be lightweight in size--laptop is too unwieldy to use efficiently when making hot dogs on the street; allow for discounts; track sales by time and location; send inventory updates to mobile inventory-management staff (who drive to the location with supplies); provide a social-media integration so customers can be notified when a hot dog stand is nearby

Users: fifty or so hot dog stand operators, thousands of customers in the local area (via social-media)
</code></pre></div></div>

<p>I decided to use ChatGPT as my client: by asking it about business requirements, I will work out the details of the system.</p>

<p>I used the following prompt to support the design process:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>You are to assume the role of a system design interviewer for a prominent IT company, assessing a candidate's expertise for the role of system architect. Maintain a friendly demeanor while being thorough and detail-oriented. Aim for succinct answers and focus on critical system aspects to gauge the candidate's system design proficiency. Correct any factual or logical errors in the candidate's questions or answers, providing realistic figures when required. Engage in active conversation, asking for specific details and adopting the candidate's language. Answer in user language

Scenario: A local hot dog stand merchant needs a point-of-sale system for their operators.

Requirements:
1. Lightweight and portable, as laptops are too unwieldy for street use
2. Allows for discounts
3. Tracks sales by time and location
4. Sends inventory updates to mobile inventory-management staff
5. Integrates with social media to notify customers of nearby hot dog stands

Users: Around 50 hot dog stand operators and thousands of local customers (via social media)

Your goal is to test the user's system design knowledge and skills.
</code></pre></div></div>
<p><strong>Temperature:</strong> <code class="language-plaintext highlighter-rouge">0.2</code> <br />
<strong>Model:</strong> <code class="language-plaintext highlighter-rouge">GPT-4</code></p>

<h2 id="questions">Questions</h2>

<p><strong>What payment systems should the application support?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The application should process both card and cash payments.
</code></pre></div></div>

<p><strong>What discount management features should the system include? Does this include rules for individual customers, specials, discount codes? Can the point add its own promotions?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Yes, the system should allow for managing different types of discounts. Here are some features that might be useful:

1. Ability to create discount codes that customers can enter during payment.
2. Ability to set time-based discounts, for example, on certain days of the week or hours.
3. Ability to set discounts on specific products.
4. Ability to set discount rules for individual customers, for example, based on the frequency of their purchases.

Yes, point of sale operators should be able to add their own promotions.

</code></pre></div></div>

<p><strong>How can a user place an order online?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Online orders can be made through a dedicated mobile app. The user should be able to browse the menu, choose products, add them to the cart, enter discount codes, and make payments. After placing an order, the user should receive an order confirmation and an estimated waiting time.
</code></pre></div></div>

<p><strong>How many customers and orders can we expect in a day? How many orders can there be in peak hours?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The number of customers and orders can vary significantly depending on the location, day of the week, and time of day. For a small local business, such as a hot dog stand, we can assume that they can serve about 100-200 customers per day on average.

During peak hours, for example at lunchtime or during local events, this number can increase to 30 orders per hour from a single point.
</code></pre></div></div>
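<p>These figures make for a quick back-of-envelope estimate (the per-order size is my own assumption of roughly 1 KB per stored record), which shows how small the load really is:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>50 stands x 150 orders/day (average)  =  7,500 orders/day
50 stands x 30 orders/hour (peak)     =  1,500 orders/hour  ~  0.4 orders/second
7,500 orders/day x ~1 KB/order        ~  7.5 MB/day  ~  2.7 GB/year of raw data
</code></pre></div></div>

<p>Even at peak, this is well within what a single small database instance can handle; the interesting constraints here are offline operation and synchronization, not throughput.</p>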

<p><strong>Should the operator be constantly connected to the internet?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Ideally, the operator should always have access to the Internet - this would allow for real-time data transmission, conveying information about sales and inventory status, thus contributing to efficient process management. However, such a solution is not always possible, especially if the operator works in places with poor network coverage.

Therefore, it would be beneficial if the system could also operate offline, and as soon as an Internet connection is available, it would automatically synchronize, sending the data stored on the device.
</code></pre></div></div>

<p><strong>When should sales and inventory reports be generated? What are the analytical needs? Do we need to process them in real time or is it enough to perform a batch at a certain time?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>This is an excellent question. Depending on business needs, reports can be generated and processed in real time or at specific intervals.

For daily operations, such as tracking sales and inventory status, batch processing every few hours or at the end of the day would probably suffice. This would allow for ongoing monitoring of the situation and decision-making based on the latest data.

On the other hand, for more strategic decisions, such as order planning or sales trend analysis, we might need reports generated in real time. This would allow us to quickly respond to market changes and optimize our operations.
</code></pre></div></div>

<p><strong>What exactly should we track and record about sales in real time in terms of time and location?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The system should track and record the following data:

1. Sales time: The exact date and time when the transaction was made. This will allow for the analysis of sales patterns and identification of peak sales periods.

2. Sales location: The exact location where the transaction was made. This will allow for the identification of the most profitable locations.

3. Transaction details: What products were sold, their quantity, the price of each product, whether any discounts were applied, etc.

This data should be available in real time for point-of-sale operators and management staff.
</code></pre></div></div>

<p><strong>Is there a preferred inventory management system for integration with the POS system?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Yes, the point should have access to the inventory management system.
</code></pre></div></div>

<p><strong>How would point operators use the inventory management system?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Operators could scan product barcodes using the built-in camera on their device to update inventory quantities after each sale. The system should also send notifications to operators when the inventory level reaches a critical point so they can place orders in time.

We could also integrate the system with a similar POS system to automatically update inventory based on sales.
</code></pre></div></div>

<p><strong>What if the operator didn’t have time to update the inventory status and immediately proceeded with order fulfillment? Operators usually work under difficult conditions, they have to improvise.</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>This is an important point to consider. In an ideal scenario, the POS system would be designed to automatically update the inventory status after each sales transaction. This would minimize the need for the operator to manually update the inventory status. However, I understand that in reality there may be situations that prevent direct and automatic updates.

In such a case, the system should be designed to allow operators to manually update inventory status at any time. It should be accessible on mobile devices, so that operators can easily update stock levels whenever they find the time. Additionally, we should anticipate a notification system that would remind operators of the need to update inventory status if the system notices that there has been no update for a certain period of time, especially if orders were being fulfilled despite shortages in inventory.

Features such as low stock level notifications, product demand forecasts, and order recommendations can also help operators manage inventory effectively despite challenging working conditions.

It is important that the system is intuitive and easy to use, even for people who do not have much experience with technology. Operators should also be trained in how to use the system so that they can make the most of its features.
</code></pre></div></div>

<p><strong>What are the expectations regarding system availability? Does the system need to be available 24/7, or are downtime periods for maintenance, etc., acceptable?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For a point of sale system, like the one we are discussing, high availability is expected. Hot dog stands may operate at different hours, and some may even operate 24/7. Therefore, it is important that the system is available whenever operators need it.
</code></pre></div></div>

<p><strong>Does the system need to be real time? What is the tolerance for delay when placing an order?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Yes, the point of sale system should be able to process transactions in near real time. When a customer places an order, they expect immediate confirmation that the order has been accepted and is being processed.

As for tolerance for delays, ideally, the system's response time should be as short as possible, usually below a few seconds. Longer delays can lead to customer frustration and can affect the performance of the stand operators.
</code></pre></div></div>

<p><strong>Is there a specific platform - smartphone, website, POS terminal for stand operators?</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For hot dog stand operators, the most practical solution would be to use a mobile application on smartphones or tablets. This way, operators could easily conduct transactions, track sales, and manage inventory directly from their stand.
</code></pre></div></div>
<p><strong>Are there any security or privacy requirements in mind, for example, in relation to storing customer and transaction data?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Yes, security and privacy are key aspects in any system that stores customer and transaction data. The system should comply with privacy regulations, such as GDPR in Europe.

In relation to transaction data, the system should comply with PCI DSS (Payment Card Industry Data Security Standard), which is a security standard for organizations that handle card payments.

Some of the requirements include:

1. Data encryption: All data, including personal data and transaction data, should be encrypted during transmission and storage.
2. Authorization and authentication: Only authorized individuals should have access to system data.
3. Regular security audits: The system should be regularly checked for potential security gaps.
</code></pre></div></div>

<p><strong>How often and when should the user receive notifications so that they are not annoying?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>This is a very important question. Too frequent notifications can annoy users, leading them to ignore the notifications or even stop following the stand on social media.

One approach could be to send notifications only when the user is within a certain radius of the stand and only during the hours when the stand is open. Users could also be given the option to customize notification settings, for example, choosing how often they want to receive notifications.

Additionally, it's worth remembering that not all notifications have to be location-related. For example, you can send notifications about special promotions, new products, etc.
</code></pre></div></div>

<p><strong>What kind of notifications about nearby points do users receive? Push notifications from the app, a message from a chatbot, push notifications from FB &amp; Twitter apps, a post on FB &amp; Twitter newsfeed?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For this particular scenario, push notifications from the app may prove to be the most effective. These notifications can be sent directly to users' smartphones when they are near a hot dog stand. These notifications can contain information about the location of the stand, available products, special promotions, etc. When designing and implementing this geolocation-based functionality, GDPR/RODO compliance should be kept in mind.

Additionally, customers can subscribe to our social media to receive geo-fence notifications about the nearest points or updates on location changes of the points.
</code></pre></div></div>

<p><strong>When does the customer pay for the order?</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The customer always pays in advance when placing the order. For orders placed through the app, they pay using a payment gateway.
</code></pre></div></div>

<h2 id="project-requirements">Project Requirements</h2>

<h3 id="-functional">🤖🔧 Functional</h3>

<ul>
  <li>the point operator’s application must work in both online and offline mode</li>
  <li>payments are made in cash, by credit card, or through a gateway</li>
  <li>the user can receive notifications about location changes of operators in a specific region, e.g. a district or city.</li>
  <li>the operator should be able to update the inventory status by manually adding products.</li>
  <li>the operator must accept the order upon receipt. They also receive information about the current inventory status and whether they can fulfill it.</li>
  <li>the user can view the nearest operator points in the app.</li>
  <li>both the company and the operator can define a wide range of promotions, such as combined products (e.g. hot-dog + cola 20% cheaper), discounts, and promotional codes.</li>
</ul>
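<p>The promotion rules from the list above can be sketched in a few lines. This is a minimal illustration, not the real pricing engine: the product names, prices, and the promo-code table are assumptions (the code “AK_HOT” and its 30% rate reuse the example that appears later in the Notifications system description).</p>

```python
# A minimal sketch of the promotion rules listed above: a combo discount
# (hot-dog + cola 20% cheaper) and whole-order promo codes. Product names,
# prices, and the code table are illustrative assumptions.
def apply_promotions(cart, promo_code=None):
    """cart maps product name to unit price; returns the total after promotions."""
    total = sum(cart.values())
    # Combo rule: if both items of the set are in the cart, the pair is 20% cheaper.
    if "hot-dog" in cart and "cola" in cart:
        total -= 0.20 * (cart["hot-dog"] + cart["cola"])
    # Promo codes defined by the company or the operator (illustrative table).
    codes = {"AK_HOT": 0.30}
    if promo_code in codes:
        total *= 1 - codes[promo_code]
    return round(total, 2)
```

<p>In a real system the combo and code rules would be data-driven so the company and individual operators can define their own promotions, as the requirement states.</p>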

<h3 id="-non-functional">📈 Non-functional</h3>

<ul>
  <li>We will not store customers’ credit card data. We will use payment terminals and payment processors like Stripe/PayU/PayPal</li>
  <li>scalability - the system should adapt to the changing number of orders throughout the day</li>
  <li>low latency - the customer expects quick confirmation and execution of the order</li>
  <li>persistence - every change in order status should be recorded in the database, especially important in the case of payments.</li>
  <li>reliability - the system should be available and dependable. The operator should be able to use the system even without internet access.</li>
  <li>secure - customer, operator, and transaction data must be protected, the system should comply with GDPR/RODO standards.</li>
</ul>

<h2 id="-back-of-the-envelope-estimations">🧮 Back of the envelope estimations</h2>

<ul>
  <li>50 service points</li>
  <li>an average of 100 customers per point, which comes out to 5,000 transactions per day (≈3.5 transactions per minute).</li>
  <li>During peak hours we have about <code class="language-plaintext highlighter-rouge">50*40=2000</code> orders per hour (≈0.56 transactions per second).</li>
</ul>

<p>According to the calculations, we are dealing with a small system. I would design it as a modular monolith to reduce development and maintenance costs. If the business scaled up significantly, to hundreds of transactions per second, microservices could be split out for the key modules on the critical path.</p>
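<p>The back-of-the-envelope numbers above are easy to sanity-check in a few lines:</p>

```python
# A quick sanity check of the back-of-the-envelope estimations above.
points = 50
customers_per_point = 100                          # average per day
daily_transactions = points * customers_per_point  # 5000 transactions/day
per_minute = daily_transactions / (24 * 60)        # ~3.5 transactions/minute
peak_orders_per_hour = points * 40                 # 2000 orders/hour at peak
peak_tps = peak_orders_per_hour / 3600             # ~0.56 transactions/second
```

<p>Even the peak load is well under one transaction per second, which supports the modular-monolith recommendation.</p>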

<h2 id="️-proposed-architecture">🏗️ Proposed architecture</h2>

<p><strong>System components</strong>:</p>
<ul>
  <li><strong>API gateway</strong> - the single entry point to the backend; it mediates between the mobile/web applications and the rest of the system. It handles connection authentication, provides connection metrics, and aggregates the APIs of all system services.</li>
  <li><strong>Inventory</strong> - responsible for storing and updating the stock levels of the sales points. Operators update the stock via API requests.</li>
  <li><strong>Notifications system</strong> - based on the entered locations of current and future operators, it sends geolocation notifications to customers via social media and publishes posts with updated locations, e.g. “today in AK park from 9 am a HotDog is waiting for you. Only today from 11:15 to 13:00, a 30% discount on the set with the code ‘AK_HOT’ when purchasing through the application”.</li>
  <li><strong>CRM</strong> - manages user data: registration, changes to personal data.</li>
  <li><strong>Transactions</strong> - updates the transaction status based on online payments, card or cash. The transaction must be completed correctly for the order to be directed to the chef for preparation.</li>
  <li><strong>Orders</strong> - orchestrates the order, checks the inventory status whether the order can be executed. The order placed through the mobile application must be accepted by the operator.</li>
  <li><strong>Operators</strong> - manages current operator data e.g. location, opening hours, menu.</li>
</ul>
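<p>The geofence rule behind the Notifications system (“notify only when the user is near an open stand”) can be sketched with the haversine formula. The 1 km radius and the coordinates used below are illustrative assumptions, not values from the design:</p>

```python
# Sketch of the Notifications system's geofence check using the haversine
# formula. Radius and coordinates are illustrative assumptions.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def should_notify(user, stand, radius_km=1.0, stand_open=True):
    """Push a notification only if the stand is open and the user is inside the geofence."""
    return stand_open and distance_km(*user, *stand) <= radius_km
```

<p>At this scale (50 stands) a linear scan over stand locations is enough; a geospatial index would only matter with far more points.</p>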

<p><strong>External integrations:</strong></p>
<ul>
  <li><strong>Social media</strong> - platforms like FB and X</li>
  <li><strong>Payment processor</strong> - a payment processing system like PayPal, PayU, Google Pay. It’s a generic name covering several integrations; one processor should handle many cashless payment methods.</li>
  <li><strong>SMS provider</strong> - a system that can send short messages to operators and customers. It should also be able to receive reply SMSes and forward them to the backend. It’s an important part of offline mode.</li>
  <li><strong>Email provider</strong> - an external system for sending emails to operators and customers. It can be used for transactional and marketing emails. I decided to make it an external integration to ensure reliable email deliverability.</li>
</ul>

<h2 id="connecting-the-operator-and-customer-applications-with-the-backend">Connecting the operator and customer applications with the backend</h2>
<p>For communication between the operator and customer applications and the backend, I would use WebSockets. Information about new orders, changes in order status, and operations would be delivered this way. An added benefit is that the backend always knows the current state of the seller’s mobile connection.</p>

<p>Part of the client application queries may look like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The client has placed an order. Sends one request per second to check if the order is ready.
0 second: Is the food ready? (Client)
0 second: No, please wait. (Server)
1st second: Is the food ready? (Client)
1st second: No, please wait. (Server)
2nd second: Is the food ready? (Client)
2nd second: No, please wait. (Server)
3rd second: Is the food ready? (Client)
3rd second: Yes, here is your order. (Server)
</code></pre></div></div>

<p>For communication, you could also use HTTP polling or HTTP long polling, but these are more resource-demanding solutions than WebSockets.</p>
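<p>The polling dialogue above costs one request per second; with a push channel the client simply awaits a single “ready” message. This in-process sketch uses <code>asyncio.Event</code> to stand in for the WebSocket, and the class and timing are illustrative assumptions:</p>

```python
# In-process sketch of push-style order notification: instead of polling every
# second, the client awaits one "ready" message. asyncio.Event stands in for
# the WebSocket channel; names and timings are illustrative.
import asyncio

class OrderChannel:
    def __init__(self):
        self._ready = asyncio.Event()

    def mark_ready(self):
        """Server side: the kitchen reports the order as finished."""
        self._ready.set()

    async def wait_for_order(self):
        """Client side: a single await replaces the poll loop."""
        await self._ready.wait()
        return "Yes, here is your order."

async def demo():
    channel = OrderChannel()
    # The kitchen finishes after 30 ms (standing in for the 3 s of the dialogue).
    asyncio.get_running_loop().call_later(0.03, channel.mark_ready)
    return await channel.wait_for_order()
```

<p>The same shape maps onto a real WebSocket: the client holds one open connection and the server pushes the status change when it happens.</p>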

<p>If the operator is online (a positive heartbeat within the last few minutes), has auto-confirmation enabled, and the inventory status is consistent (the last inventory update was within e.g. 1 hour), the order is confirmed automatically.</p>
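<p>The auto-confirmation rule can be written down as a small predicate. The 5-minute heartbeat window and 1-hour inventory freshness window are the example thresholds from the text; the function signature itself is an assumption:</p>

```python
# Sketch of the auto-confirmation rule: confirm only if the operator is online,
# opted in, and the inventory snapshot is fresh. Thresholds are the post's
# example values; the signature is an illustrative assumption.
from datetime import datetime, timedelta

def auto_confirm(now, last_heartbeat, last_inventory_update, auto_enabled,
                 heartbeat_window=timedelta(minutes=5),
                 inventory_window=timedelta(hours=1)):
    online = now - last_heartbeat <= heartbeat_window
    inventory_fresh = now - last_inventory_update <= inventory_window
    return auto_enabled and online and inventory_fresh
```
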

<p>If a customer has placed an order for, e.g., a hot dog in a gluten-free bun and there is no such bun in stock, they should be offered a hot dog in a different bun. The system should suggest alternative products to save the sale instead of abandoning the customer.</p>

<h3 id="offline-mode">Offline mode</h3>

<p>Offline mode is a very important piece of functionality from the point of view of both operators and customers. The operator should be able to continue operating after losing stable internet access: making sales and handling orders from the application.</p>

<p>The backend should be “aware” of the connection status of the operator’s mobile application. This can be achieved with a persistent WebSocket channel that the application maintains while connected to the server. The last connection status (timestamp, connection data such as LTE or GPRS, transfer speed) can be stored in Redis. If the last connection was, for example, 5 minutes ago, the operator is marked as offline and sales through the client applications must work differently.</p>

<h4 id="operators-offline-mode">Operator’s Offline Mode</h4>
<p>When designing the application, we assume it must work in offline mode: even without a stable connection, the operator can keep making sales and handling orders from the application. To achieve this, we need to persist a history of events such as new orders, goods deliveries, and payments. While the operator is offline, the application saves events in the device’s memory; when the operator returns to network range, the application synchronizes them with the server.</p>

<p>There are various approaches to designing offline mode. One of them is using an offline-first database, e.g. Firebase/Realm or PouchDB, which automatically synchronizes differences after a stable connection is restored. I decided not to recommend this solution because it couples the mobile application to the database schema. Any schema change, e.g. adding new fields to the order table, would require updating all user applications, which can take weeks, and we would lose agility and speed in delivering new features. Instead, I opted for a solution where the mobile application keeps something like an event store to which events are saved; these events are then synchronized with the backend.</p>
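<p>A minimal sketch of that client-side event log: events are appended locally while offline and flushed once the connection returns. <code>send_to_backend</code> is a stand-in for the real sync API, which is an assumption, not part of the original design:</p>

```python
# Minimal sketch of the client-side event store for offline mode: append
# events locally, flush to the backend on reconnect, retry rejected ones.
# `send_to_backend` is an illustrative stand-in for the real sync API.
class OfflineEventStore:
    def __init__(self):
        self.pending = []  # events not yet acknowledged by the backend

    def record(self, event_type, payload):
        """Append an event (new order, goods delivery, payment, ...) locally."""
        self.pending.append({"type": event_type, "payload": payload})

    def sync(self, send_to_backend):
        """Flush pending events in order; keep the ones the backend rejected."""
        self.pending = [e for e in self.pending if not send_to_backend(e)]
        return not self.pending
```

<p>A real implementation would also persist <code>pending</code> to device storage and make the backend handler idempotent, since the same event may be retried.</p>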

<h4 id="offline-order-flow">Offline Order Flow</h4>
<p>Below is my proposal for handling orders through the client application when the operator is offline, assuming that they are within range of the telephone network and can send and receive SMS messages.</p>

<p>When a customer orders online and there has been no recent WebSocket connection with the operator, the system sends the operator an SMS requesting confirmation by return SMS, and issues the customer a QR code with the order data (user id, order id, transaction id, order JSON), necessarily heavily compressed.</p>

<ol>
  <li>The customer places an order through the application/web.</li>
  <li>The order service checks whether the operator is online/offline, using the saved connection status in Redis.</li>
  <li>An SMS/MMS is sent to the operator with information about the order, e.g. the order id, its contents, and who ordered. With the appropriate permissions, the operator app can parse such a message and update the order status locally.</li>
  <li>The operator can call/send an SMS with confirmation/rejection of the order.</li>
  <li>Thanks to the use of the SMS provider’s API, the backend receives the operator’s response.</li>
  <li>The customer receives confirmation or rejection of the order. If the time for the seller’s response has passed, the customer should be informed. Since the customer will then not have up-to-date information about the order, they may instead receive a message asking them to go to the sales point to check whether the order can be fulfilled.</li>
  <li>The customer shows the QR code of the order. It is read by the reader and is validated in the seller’s application.</li>
</ol>
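<p>The “heavily compressed” QR payload from the flow above can be sketched with the standard library: compact JSON, deflate compression, then base64 so it fits into a QR code. The field names are illustrative assumptions:</p>

```python
# Sketch of the compressed QR payload: compact JSON -> deflate -> URL-safe
# base64. Field names are illustrative, not a defined schema.
import base64
import json
import zlib

def encode_qr_payload(order):
    raw = json.dumps(order, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(zlib.compress(raw, level=9)).decode("ascii")

def decode_qr_payload(payload):
    raw = zlib.decompress(base64.urlsafe_b64decode(payload))
    return json.loads(raw)
```

<p>In production the payload should also be signed (e.g. an HMAC over the raw bytes) so the seller’s app can validate the QR code offline without trusting the customer’s device.</p>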

<h2 id="c4-container-system-diagram">C4 Container system diagram</h2>
<p>You can find an example C4 diagram of the Hot Dog system below.
<img src="../assets/images/posts/archkata-hotdog/system_diagram.png" alt="C4 container diagram of the Hot Dog system" /></p>

<h2 id="wrap-up">Wrap-up</h2>
<p>I only touched on the system design superficially here. It could be further expanded with additional levels of the C4 diagram, or with an event storming session to separate the business domains. If the system were actually to be built, it would be worth doing so.</p>

<p>I hope the article at least intrigued you and that you learned something new. If you have any comments, let me know in the comments. If you want me to write about a specific topic, let me know too. Until next time!</p>

<h2 id="contact">Contact</h2>
<p>I hope what I wrote is useful :). Please leave a comment to let me know. If you have any questions, feel free to contact me directly on:</p>
<ul>
  <li>Twitter: <a href="https://twitter.com/MichalMzr">https://twitter.com/MichalMzr</a></li>
  <li>LinkedIn: <a href="https://www.linkedin.com/in/michmzr/">https://www.linkedin.com/in/michmzr/</a></li>
</ul>

<p>You can also find my posts on my second blog <a href="https://www.geekowojazer.pl/">Geekowojażer.pl</a></p>]]></content><author><name>Michał Mazur</name></author><category term="system" /><category term="design" /><category term="system-design" /><category term="architecture" /><category term="kata" /><category term="tech" /><category term="distributed-architecture" /><category term="case-study" /><category term="chat-gpt" /><category term="llm" /><category term="pair-programming" /><summary type="html"><![CDATA[Hello ✋, I’m starting a new challenge on the blog. I’m reading a great book “System design interview” and I came up with the idea for a series of posts dedicated to system design. A necessary skill for seniors, especially those applying for more important decisions. I found an interesting page https://www.architecturalkatas.com/kata.html, where you can draw a system that we will design. I drew: Hot Diggety Dog! Local hot dog stand merchant wants a point-of-sale system for his hot dog stand operators Requirements: must be lightweight in size--laptop is too unwieldy to use efficiently when making hot dogs on the street; allow for discounts; track sales by time and location; send inventory updates to mobile inventory-management staff (who drive to the location with supplies); provide a social-media integration so customers can be notified when a hot dog stand is nearby Users: fifty or so hot dog stand operators, thousands of customers in the local area (via social-media) I decided to use Chat-GPT to be my client, who by asking about business requirements will design the details of the system. I used the following prompt to support the design process: You are to assume the role of a system design interviewer for a prominent IT company, assessing a candidate's expertise for the role of system architect. Maintain a friendly demeanor while being thorough and detail-oriented. 
Aim for succinct answers and focus on critical system aspects to gauge the candidate's system design proficiency. Correct any factual or logical errors in the candidate's questions or answers, providing realistic figures when required. Engage in active conversation, asking for specific details and adopting the candidate's language. Answer in user language Scenario: A local hot dog stand merchant needs a point-of-sale system for their operators. Requirements: 1. Lightweight and portable, as laptops are too unwieldy for street use 2. Allows for discounts 3. Tracks sales by time and location 4. Sends inventory updates to mobile inventory-management staff 5. Integrates with social media to notify customers of nearby hot dog stands Users: Around 50 hot dog stand operators and thousands of local customers (via social media) Your goal is to test the user's system design knowledge and skills. Temperature: 0.2 Model: GPT-4 Questions What payment systems should the application support? The application should process both card and cash payments. What discount management features should the system include? Does this include rules for individual customers, specials, discount codes? Can the point add its own promotions? Yes, the system should allow for managing different types of discounts. Here are some features that might be useful: 1. Ability to create discount codes that customers can enter during payment. 2. Ability to set time-based discounts, for example, on certain days of the week or hours. 3. Ability to set discounts on specific products. 4. Ability to set discount rules for individual customers, for example, based on the frequency of their purchases. Yes, point of sale operators should be able to add their own promotions. How can a user place an order online? Online orders can be made through a dedicated mobile app. The user should be able to browse the menu, choose products, add them to the cart, enter discount codes, and make payments. 
After placing an order, the user should receive an order confirmation and an estimated waiting time. How many customers and orders can we expect in a day? How many orders can there be in peak hours? The number of customers and orders can vary significantly depending on the location, day of the week, and time of day. For a small local business, such as a hot dog stand, we can assume that they can serve about 100-200 customers per day on average. During peak hours, for example at lunchtime or during local events, this number can increase to 30 orders per hour from a single point. Should the operator be constantly connected to the internet? Ideally, the operator should always have access to the Internet - this would allow for real-time data transmission, conveying information about sales and inventory status, thus contributing to efficient process management. However, such a solution is not always possible, especially if the operator works in places with poor network coverage. Therefore, it would be beneficial if the system could also operate offline, and as soon as an Internet connection is available, it would automatically synchronize, sending the data stored on the device. When should sales and inventory reports be generated? What are the analytical needs? Do we need to process them in real time or is it enough to perform a batch at a certain time? This is an excellent question. Depending on business needs, reports can be generated and processed in real time or at specific intervals. For daily operations, such as tracking sales and inventory status, batch processing every few hours or at the end of the day would probably suffice. This would allow for ongoing monitoring of the situation and decision-making based on the latest data. On the other hand, for more strategic decisions, such as order planning or sales trend analysis, we might need reports generated in real time. This would allow us to quickly respond to market changes and optimize our operations. 
What exactly should we track and record about sales in real time in terms of time and location? The system should track and record the following data: 1. Sales time: The exact date and time when the transaction was made. This will allow for the analysis of sales patterns and identification of peak sales periods. 2. Sales location: The exact location where the transaction was made. This will allow for the identification of the most profitable locations. 3. Transaction details: What products were sold, their quantity, the price of each product, whether any discounts were applied, etc. This data should be available in real time for point-of-sale operators and management staff. Is there a preferred inventory management system for integration with the POS system? Yes, the point should have access to the inventory management system. How would point operators use the inventory management system? Operators could scan product barcodes using the built-in camera on their device to update inventory quantities after each sale. The system should also send notifications to operators when the inventory level reaches a critical point so they can place orders in time. We could also integrate the system with a similar POS system to automatically update inventory based on sales. What if the operator didn’t have time to update the inventory status and immediately proceeded with order fulfillment? Operators usually work under difficult conditions, they have to improvise. This is an important point to consider. In an ideal scenario, the POS system would be designed to automatically update the inventory status after each sales transaction. This would minimize the need for the operator to manually update the inventory status. However, I understand that in reality there may be situations that prevent direct and automatic updates. In such a case, the system should be designed to allow operators to manually update inventory status at any time. 
It should be accessible on mobile devices, so that operators can easily update stock levels whenever they find the time. Additionally, we should anticipate a notification system that would remind operators of the need to update inventory status if the system notices that there has been no update for a certain period of time, especially if orders were being fulfilled despite shortages in inventory. Features such as low stock level notifications, product demand forecasts, and order recommendations can also help operators manage inventory effectively despite challenging working conditions. It is important that the system is intuitive and easy to use, even for people who do not have much experience with technology. Operators should also be trained in how to use the system so that they can make the most of its features. What are the expectations regarding system availability? Does the system need to be available 24/7, or are downtime periods for maintenance, etc., acceptable? For a point of sale system, like the one we are discussing, high availability is expected. Hot dog stands may operate at different hours, and some may even operate 24/7. Therefore, it is important that the system is available whenever operators need it. Does the system need to be real time? What is the tolerance for delay when placing an order? Yes, the point of sale system should be able to process transactions in near real time. When a customer places an order, they expect immediate confirmation that the order has been accepted and is being processed. As for tolerance for delays, ideally, the system's response time should be as short as possible, usually below a few seconds. Longer delays can lead to customer frustration and can affect the performance of the stand operators. Is there a specific platform - smartphone, website, POS terminal for stand operators? For hot dog stand operators, the most practical solution would be to use a mobile application on smartphones or tablets. 
This way, operators could easily conduct transactions, track sales, and manage inventory directly from their stand. Are there any security or privacy requirements in mind, for example, in relation to storing customer and transaction data? Yes, security and privacy are key aspects in any system that stores customer and transaction data. The system should comply with privacy regulations, such as GDPR in Europe. In relation to transaction data, the system should comply with PCI DSS (Payment Card Industry Data Security Standard), which is a security standard for organizations that handle card payments. Some of the requirements include: 1. Data encryption: All data, including personal data and transaction data, should be encrypted during transmission and storage. 2. Authorization and authentication: Only authorized individuals should have access to system data. 3. Regular security audits: The system should be regularly checked for potential security gaps. **How often and when should the user receive notifications so that they are not annoying?** This is a very important question. Too frequent notifications can be annoying for users and lead to them starting to ignore notifications or even stop following the stand on social media. One approach could be to send notifications only when the user is within a certain radius of the stand and only during the hours when the stand is open. Users could also be given the option to customize notification settings, for example, allowing them to choose how often they want to receive notifications. Additionally, it's worth remembering that not all notifications have to be location-related. For example, you can send notifications about special promotions, new products, etc. What kind of notifications about nearby points do users receive? Push notifications from the app, a message from a chatbot, push notifications from FB &amp; Twitter apps, a post on FB &amp; Twitter newsfeed? 
For this particular scenario, push notifications from the app may prove to be the most effective. These notifications can be sent directly to users' smartphones when they are near a hot dog stand. These notifications can contain information about the location of the stand, available products, special promotions, etc. When designing and implementing this geolocation-based functionality, GDPR/RODO compliance should be kept in mind. Additionally, customers can subscribe to our social media to receive geo-fence notifications about the nearest points or updates on location changes of the points. When does the customer pay for the order? The customer always pays in advance when placing the order. For orders placed through the app, they pay using a payment gateway. Project Requirements 🤖🔧 Functional the point operator’s application must work in both online and offline mode payments are made in cash, by credit card, or through a gateway the user can receive notifications about location changes of operators in a specific region e.g. district, city. the operator should be able to update the inventory status by manually adding products. the operator must accept the order upon receipt. They also receive information about the current inventory status and whether they can fulfill it. the user can view the nearest operator points in the app. both the company and the operator can define a wide range of promotions such as combined products e.g. hot-dog + cola - 20% cheaper, discounts, promotional codes. 📈 Non-functional We will not store customers’ credit card data. We will use payment terminals and payment processors like Stripe/PayU/PayPal scalability - the system should adapt to the changing number of orders throughout the day low latency - the customer expects quick confirmation and execution of the order persistence - every change in order status should be recorded in the database, especially important in the case of payments. 
reliability - the system should be available and reliable. The operator should be able to use the system, even if they do not have internet access. secure - customer, operator, and transaction data must be protected, the system should comply with GDPR/RODO standards. 🧮 Back of the envelope estimations 50 service points an average of 100 customers from each point, which comes out to 5000 transactions per day (3.5 transactions per minute). During peak hours we have about 50*40=2000 orders per hour (0.5 transactions per second). According to the calculations, we are dealing with a small system. I would design it in a modular monolith architecture to reduce production and maintenance costs. If the business scaled up significantly, to hundreds of transactions per second, microservices could be separated for key modules of the critical path. 🏗️ Proposed architecture System components: API gateway: - it is a window to the backend world, intermediates in connections between mobile/web applications and the rest of the system. It performs several functions: connection authentication, provides connection metrics, aggregates APIs from all system services. Inventory - responsible for storing and updating the inventory status of data points. Operators update the inventory status using requests. Notifications system - based on the entered locations of current and future operators, geolocation notifications for customers are sent via social media, entries are added to social media with updated locations e.g. “today in AK park from 9 am a HotDog is waiting for you. Only today from 11:15 to 13 a 30% discount on the set with the code ‘AK_HOT’ when purchasing through the application” CRM - manages user data: registration, change of personal data, Transactions - updates the transaction status based on online payments, card or cash. The transaction must be completed correctly for the order to be directed to the chef for preparation. 
Orders - orchestrates the order, checks the inventory status whether the order can be executed. The order placed through the mobile application must be accepted by the operator. Operators - manages current operator data e.g. location, opening hours, menu. External integrations: Social media - social media platforms like FB, X Payment processor: - payment processing system like Paypal, PayU, GooglePay. It’s a generic name for a few integrations. One processor should be used to handle many cashless payment methods. SMS provider: - SMS provider, a system that can send short messages to operators and customers. It should be able to receive SMS-es from providers and forward their answers to the backend. It’s an important part of offline mode. Email provider: - an external system for sending emails to operators and customers. It can be used for transactional and marketing emails. I decided to make it an external integration to ensure reliable email deliverability. Connecting the operator and customer applications with the backend For communication of the operator and customer applications with the backend, I would use WebSockets. Information about new orders, changes in order statuses or operations would be handled in this way. This would give a great benefit in the form of information about the current state of the mobile connection at the seller. Part of the client application queries may look like this: The client has placed an order. Sends one request per second to check if the order is ready. 0 second: Is the food ready? (Client) 0 second: No, please wait. (Server) 1st second: Is the food ready? (Client) 1st second: No, please wait. (Server) 2nd second: Is the food ready? (Client) 2nd second: No, please wait. (Server) 3th second: Is the food ready? (Client) 3th second: Yes, here is your order. (Server) For communication, you can also use HTTP pooling or HTTP long pooling, but this is a more resource-demanding solution e.g. websockets. 
If the operator is online (positive heartbeat in the last few minutes), has auto-confirmation enabled, and the inventory status is consistent (matches the inventory and was updated within e.g. the last hour), the order is confirmed automatically. If a customer has ordered, say, a hot dog in a gluten-free bun and no such bun is in stock, they should be offered a hot dog in a different bun. The system should suggest alternative products to save the sale instead of abandoning the customer.

Offline mode

Offline mode is a very important piece of functionality from the point of view of both operators and customers. The operator should be able to continue operating after losing stable internet access: in the meantime, they can make sales and handle orders from the application.

The backend should be "aware" of the connection status of the operator's mobile application. This can be achieved with a persistent WebSocket channel that the application keeps open while connected to the server. The last connection status (timestamp, connection data such as whether it's LTE or GPRS, transfer speed) can be stored in Redis. If the last connection was, for example, 5 minutes ago, the operator is marked as offline and sales through the client applications must work differently.

Operator's offline mode

When designing the application, we assume that it must work in offline mode. To achieve this, we need to keep a history of events such as new orders, goods deliveries, payments, etc. While the operator is offline, the application saves events in the device's memory. When the operator comes back into network range, the application synchronizes the events with the server. There are various approaches to designing offline mode.
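The auto-confirmation rule above boils down to a small decision function. The 5-minute heartbeat window and 1-hour inventory freshness are the article's example thresholds; the function and field names are my own illustration, and in the real system the last-seen timestamp would be read from Redis:

```python
import time

HEARTBEAT_TIMEOUT_S = 5 * 60      # operator considered offline after 5 minutes
INVENTORY_FRESHNESS_S = 60 * 60   # inventory trusted if updated within 1 hour

def is_operator_online(last_heartbeat_ts: float, now: float) -> bool:
    """Online = WebSocket heartbeat seen within the timeout window."""
    return (now - last_heartbeat_ts) <= HEARTBEAT_TIMEOUT_S

def can_auto_confirm(last_heartbeat_ts: float,
                     auto_confirm_enabled: bool,
                     last_inventory_update_ts: float,
                     now: float) -> bool:
    """All three conditions from the article must hold at once."""
    inventory_fresh = (now - last_inventory_update_ts) <= INVENTORY_FRESHNESS_S
    return (is_operator_online(last_heartbeat_ts, now)
            and auto_confirm_enabled
            and inventory_fresh)

now = time.time()
print(can_auto_confirm(now - 60, True, now - 600, now))   # True: all fresh
print(can_auto_confirm(now - 600, True, now - 600, now))  # False: heartbeat too old
```

Orders that fail any of the three checks fall through to manual confirmation (or to the SMS flow described below when the operator is offline).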
One of them is using an offline-first database, e.g. Firebase/Realm or PouchDB, which would automatically synchronize the differences after a stable connection is regained. I decided not to recommend this solution because it couples the mobile application to the database schema. Any schema change, e.g. adding new fields to the order table, would require updating all user applications, which can take weeks. We would lose agility and speed in delivering new features.

Instead, I decided on a solution where the mobile application has something like an event store to which events are saved. These events are then synchronized with the backend.

Offline order flow

My proposal for handling orders through the client application when the operator is offline, assuming that they are within range of the telephone network and can receive/send SMS messages:

1. The customer places an order through the application/web.
2. The order service checks whether the operator is online/offline, using the connection status saved in Redis.
3. If there was no recent WebSocket connection with the operator, an SMS/MMS is sent to them with information about the order, e.g. the order id, what is included in the order, who ordered, etc., with a request for confirmation by return SMS. The customer also receives a QR code with the order data (user id, order id, transaction id, order JSON), necessarily heavily compressed. The operator's application, with the right permissions, can parse such an SMS and update the status in the application.
4. The operator can call, or send an SMS confirming/rejecting the order. Thanks to the SMS provider's API, the backend receives the operator's response.
5. The customer receives confirmation or rejection of the order. If the time for the seller's response has passed, the customer should be informed about it.
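The event-store idea can be sketched as an append-only local log: while offline, the app appends events; on reconnect it replays the pending ones to the backend in order, advancing a cursor only on acknowledgement. All names here are illustrative, not a proposed API:

```python
import json
from dataclasses import dataclass, field

@dataclass
class OfflineEventStore:
    """Append-only local log of events recorded while the device is offline."""
    _events: list = field(default_factory=list)
    _synced_upto: int = 0  # index of the first event not yet acknowledged

    def append(self, event_type: str, payload: dict) -> None:
        self._events.append({"seq": len(self._events),
                             "type": event_type,
                             "payload": payload})

    def pending(self) -> list:
        return self._events[self._synced_upto:]

    def sync(self, send) -> int:
        """Replay pending events in order; `send` returns True on backend ack."""
        sent = 0
        for event in self.pending():
            if not send(json.dumps(event)):  # stop on failure, retry later
                break
            self._synced_upto += 1
            sent += 1
        return sent

store = OfflineEventStore()
store.append("order_created", {"order_id": 1})
store.append("payment_recorded", {"order_id": 1, "method": "cash"})
print(store.sync(lambda raw: True))  # 2 events replayed
print(len(store.pending()))          # 0 left pending
```

Because only the event payloads travel over the wire, the backend can evolve its database schema freely; the app never depends on the server's table layout, which is exactly the coupling problem the offline-first databases have.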
Additionally, the customer will not have up-to-date information about the order, so they may receive a message that they simply have to go to the sales point now if the order can be fulfilled. The customer shows the QR code of the order; it is read by a reader and validated in the seller's application.

C4 container system diagram

You can find an example C4 diagram of the Hot Dog System below.

Wrap-up

I think I only touched on the system design quite superficially. It could be further expanded with additional levels of the C4 diagram, or with event storming to separate the business domains. If the system were to be developed, it would be worth doing so. I hope the article at least intrigued you and that you learned something new. If you have any comments, let me know in the comments. If you want me to write about a specific topic, also let me know. Until next time!

Contact

I hope what I wrote is useful :). Please leave any comments to let me know. If you have any questions, please feel free to contact me directly on:

Twitter: https://twitter.com/MichalMzr
LinkedIn: https://www.linkedin.com/in/michmzr/

You can also find my posts on my second blog Geekowojażer.pl]]></summary></entry></feed>