<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ajith Joseph's Cloud Blog]]></title><description><![CDATA[Helping organizations build and manage scalable cloud ecosystems.]]></description><link>https://blog.ajosephlive.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 22:04:11 GMT</lastBuildDate><atom:link href="https://blog.ajosephlive.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Second Brain for the Terminal: Amazon Q CLI for Ops]]></title><description><![CDATA[As an Infrastructure Architect, I’ve spent more years than I can count with my hands on a keyboard, staring at a terminal window. Let's be honest, there's a certain pride in mastering the command line. Chaining together grep, awk, sed, and xargs to u...]]></description><link>https://blog.ajosephlive.com/second-brain-for-terminal-amazon-q-for-ops</link><guid isPermaLink="true">https://blog.ajosephlive.com/second-brain-for-terminal-amazon-q-for-ops</guid><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[Amazon Q]]></category><dc:creator><![CDATA[Ajith Joseph]]></dc:creator><pubDate>Mon, 30 Jun 2025 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751907002879/2620bafb-fb85-4b17-91b1-5890bc6c5e33.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As an Infrastructure Architect, I’ve spent more years than I can count with my hands on a keyboard, staring at a terminal window. Let's be honest, there's a certain pride in mastering the command line. Chaining together <strong><em>grep</em></strong>, <strong><em>awk</em></strong>, <strong><em>sed</em></strong>, and <strong><em>xargs</em></strong> to unravel a complex problem feels like a superpower. We admins build our careers on that skill.</p>
<p>But I'll also be the first to admit it: the mental tax is real. Memorizing the subtle differences in flags between Linux distributions, recalling the exact syntax for a <strong><em>netstat</em></strong> command you only use twice a year, or building a complex diagnostic script from scratch under pressure — it’s tough. The real challenge isn't just remembering the commands; it's about composing them into a workflow to solve a problem <em>right now</em>.</p>
<p>That’s why I’ve started evaluating <strong>Amazon Q CLI</strong> as a potential companion tool for my systems and network administration teams. The pitch is simple: what if your terminal understood plain English and translated that into working shell commands or even fully functioning scripts?</p>
<p>Well, it can. Sometimes it gives you exactly what you need, and other times it offers a starting point that’s about 80% there. Either way, when you're knee-deep in troubleshooting and short on time, that’s a decent head start.</p>
<hr />
<h2 id="heading-translate-mode-the-everyday-troubleshooter"><strong>Translate Mode: The Everyday Troubleshooter</strong></h2>
<p>My first test? I wanted to see if Amazon Q CLI could handle some of the small-yet-annoying queries that usually require a few minutes of mental gymnastics or a quick Stack Overflow detour.</p>
<p>During a disk space emergency (you know the kind: /var at 99%, panic rising), I tried this:</p>
<pre><code class="lang-bash">q translate <span class="hljs-string">"find log files in /var/log that are greater than 100 MB and modified in the last 60 mins"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751906596769/5aa7deb0-a90d-4af1-b4ad-d6169b694c09.png" alt class="image--center mx-auto" /></p>
<p>It nailed it with a clean, readable find command. No man pages needed. No -mtime vs -mmin second-guessing.</p>
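<p>For reference, it boiled down to something like this (my paraphrase rather than Q's verbatim output, assuming GNU find, where <code>-mmin</code> takes minutes):</p>
<pre><code class="lang-bash"># Files over 100 MB in /var/log modified within the last 60 minutes
find /var/log -type f -size +100M -mmin -60
</code></pre>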
<p>Later, during a login audit, I threw this at it:</p>
<pre><code class="lang-bash">q translate <span class="hljs-string">"show failed or preauth ssh login attempts recorded in the journal during the last 180 mins"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751906635021/bd7bb39f-5750-4151-a229-7128900f374d.png" alt class="image--center mx-auto" /></p>
<p>Again, solid result. Parsed the right logs, scoped the time range — and importantly, didn't assume a specific distro. That’s one less thing to babysit.</p>
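<p>In text form, the result was roughly equivalent to the following (my approximation; on Debian-based distros the sshd unit is named <code>ssh</code>, which is why grepping the whole journal is the portable move):</p>
<pre><code class="lang-bash"># Failed or preauth SSH attempts from the journal over the last 3 hours
journalctl --since "3 hours ago" | grep -Ei "sshd.*(failed|preauth)"
</code></pre>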
<p>It even handled a security check that I often include during instance validation:</p>
<pre><code class="lang-bash">q translate <span class="hljs-string">"list all open ports on this instance that are accessible from 0.0.0.0"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751906665750/db2422eb-9e46-4f33-bb7e-ce4533b47a96.png" alt class="image--center mx-auto" /></p>
<p>It gave me a reliable netstat pipeline that confirmed no unintended exposure. Clean and simple.</p>
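<p>A hand-written equivalent of that pipeline (my approximation, not Q's verbatim output):</p>
<pre><code class="lang-bash"># Listening TCP/UDP sockets bound to all interfaces
sudo netstat -tulnp | grep '0.0.0.0'
</code></pre>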
<hr />
<h2 id="heading-need-more-firepower-enter-amazon-q-chat"><strong>Need More Firepower: Enter Amazon Q Chat</strong></h2>
<p>Once you start trying to automate entire workflows — beyond single-line commands — <strong>Amazon</strong> <strong>Q Chat</strong> becomes the better companion. It’s less of a translator, more of a collaborator. Ask it for a script, and it doesn’t just stop at one-liners. It builds structure, logic, and explains what it's doing.</p>
<p>First, I opened up Amazon Q Chat on the instance:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751906773988/ebd65ee5-6878-4497-b5a6-8d1b27446957.png" alt class="image--center mx-auto" /></p>
<p>Then asked:</p>
<pre><code class="lang-bash">“Generate a shell script that monitors CPU usage <span class="hljs-keyword">for</span> a service called myservice and logs anything over 85%.”
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751906795577/8b502b98-509c-4a4a-a119-c8e08afe975e.png" alt class="image--center mx-auto" /></p>
<p>Amazon Q returned a script that included ps, threshold logic, timestamps, and log formatting that made the output actually usable. Did I have to tweak it? Sure. But it felt like starting a race at the halfway mark.</p>
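<p>The full script is in the screenshot above, but the core idea can be sketched in a few lines (my trimmed-down sketch, not Q's verbatim output):</p>
<pre><code class="lang-bash">#!/bin/bash
# Sketch: log a warning when total CPU for "myservice" exceeds 85%
SERVICE="myservice"
THRESHOLD=85
LOGFILE="/var/log/${SERVICE}-cpu.log"

# Sum %CPU across every process belonging to the service
CPU=$(ps -C "$SERVICE" -o %cpu= | awk '{s+=$1} END {printf "%.0f", s}')
if [ "${CPU:-0}" -gt "$THRESHOLD" ]; then
    echo "$(date '+%F %T') WARN ${SERVICE} CPU at ${CPU}%" &gt;&gt; "$LOGFILE"
fi
</code></pre>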
<p>Another day, I needed to dig into running processes and open ports related to NGINX. Instead of bouncing between <em>ps, lsof,</em> and a stack of bookmarks, I asked:</p>
<pre><code class="lang-bash">“Give me a script to list all processes <span class="hljs-keyword">for</span> nginx and the network ports they’re using.”
</code></pre>
<p>It handed me a usable loop with pgrep and lsof, complete with structured echo output and basic validation. Bonus: it commented the sections so even junior admins could follow it confidently.</p>
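<p>The heart of that loop looks something like this (a sketch of the approach, assuming <code>pgrep</code> and <code>lsof</code> are installed):</p>
<pre><code class="lang-bash"># For each nginx process, show the network sockets it has open
for pid in $(pgrep nginx); do
    echo "=== nginx PID $pid ==="
    sudo lsof -a -p "$pid" -i -P -n
done
</code></pre>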
<p>I also gave it a bit of a stress test:</p>
<pre><code class="lang-bash">“Build me a script to check <span class="hljs-keyword">for</span> inode exhaustion, high disk usage, and mounts <span class="hljs-keyword">in</span> read-only state — and flag anything risky.”
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751906846392/a547df16-c675-4dfa-9dca-2c16824a6749.png" alt class="image--center mx-auto" /></p>
<p>While running the script, Amazon Q CLI did more than just list a few commands. It initially flagged some issues as critical—though these turned out to be false alarms—but then automatically corrected the script and provided a cleaner, updated version.</p>
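<p>For context, the kinds of checks involved can be expressed in a few lines (a simplified sketch; the 90% thresholds are illustrative, not the ones Q chose):</p>
<pre><code class="lang-bash"># Flag filesystems above 90% disk or inode usage, plus read-only mounts
df -hP | awk 'NR&gt;1 &amp;&amp; $5+0 &gt; 90 {print "DISK  " $6 " at " $5}'
df -iP | awk 'NR&gt;1 &amp;&amp; $5+0 &gt; 90 {print "INODE " $6 " at " $5}'
awk '$4 ~ /(^|,)ro(,|$)/ {print "READ-ONLY " $2}' /proc/mounts
</code></pre>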
<hr />
<h2 id="heading-beyond-bash-feeding-amazon-q-our-own-world-with-mcp"><strong>Beyond Bash: Feeding Amazon Q Our Own World with MCP</strong></h2>
<p>We’re also exploring how Amazon Q could become even smarter inside our environment using Model Context Protocol (MCP). By plugging in our own runbooks, internal tool references, and wikis, we could reach a point where Amazon Q doesn’t just say “<em>check the logs</em>” — it tells you which logs, what known issues to match against, and what our escalation policy is.</p>
<p>Imagine:</p>
<p><code>“What’s the fix for a stuck Kafka consumer on our staging cluster?”</code></p>
<p>And Amazon Q responds with our exact process, or even kicks off an automation runbook. That’s the direction we’re heading.</p>
<hr />
<h2 id="heading-installing-amazon-q-cli-on-the-instance-not-just-your-ide"><strong>Installing Amazon Q CLI <em>On</em> the Instance — Not Just Your IDE</strong></h2>
<p>Now here’s the part I’m still experimenting with — and maybe where this blog diverges from the usual.</p>
<p>Most documentation and demos show Amazon Q CLI being used from a developer’s machine, a laptop, or an IDE. But here’s the thing: IDEs can’t SSH into an instance mid-incident. And they can’t run <em>netstat</em> or read logs inside <em>/var/log</em> on an EC2 box.</p>
<p>That’s why I’m making a case for something a little unconventional: installing Amazon Q CLI <em>on</em> each EC2 instance — as a kind of assistant for system-level investigation.</p>
<p>Imagine: if you SSH into a problematic instance and Amazon Q CLI is already there, it can help accelerate the triage process. Whether it's a memory leak, a rogue process, or configuration drift, Amazon Q can give you the skeleton commands — and sometimes full scripts — without flipping through wikis or shell history.</p>
<p>This could be especially useful for:</p>
<ul>
<li><p>Instances running <em>COTS products</em> where logs and configs are scattered and vendor tooling is poor</p>
</li>
<li><p><em>Legacy app servers</em> with complicated service interdependencies and little documentation</p>
</li>
<li><p><em>Security patch verification</em>, where you need to confirm kernel versions or missing updates quickly</p>
</li>
</ul>
<p>The Amazon Q CLI doesn’t act autonomously — it won’t magically fix things. But when you’re in the terminal, troubleshooting a stubborn issue, it’s a powerful ally to have already installed and ready.</p>
<p>⚠️ <em>Important Note:</em> While installing Amazon Q CLI directly on EC2 instances can be valuable for rapid troubleshooting, it’s essential to evaluate your organization's security and compliance requirements before doing so. Production environments may have restrictions on outbound network traffic, IAM role access, or the installation of developer tools. Ensure that any such deployment aligns with your operational and security policies.</p>
<hr />
<h2 id="heading-try-it-out"><strong>Try It Out!</strong></h2>
<p>If you want to give this a spin yourself, install Amazon Q CLI:</p>
<pre><code class="lang-bash">curl --proto <span class="hljs-string">'=https'</span> --tlsv1.2 -sSf <span class="hljs-string">"https://desktop-release.q.us-east-1.amazonaws.com/latest/q-x86_64-linux.zip"</span> -o <span class="hljs-string">"q.zip"</span>
unzip q.zip
sudo ./install.sh
</code></pre>
<p>Then authenticate and open up some of those harder questions, like:</p>
<pre><code class="lang-bash">q translate <span class="hljs-string">"compare /etc/httpd/conf/httpd.conf with the baseline in /opt/configs/httpd.conf"</span>
</code></pre>
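<p>A hand-written equivalent, for comparison:</p>
<pre><code class="lang-bash"># Unified diff, baseline first
diff -u /opt/configs/httpd.conf /etc/httpd/conf/httpd.conf
</code></pre>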
<p>Or dig deeper with Chat:</p>
<pre><code class="lang-bash">“Generate a script that checks CPU, memory, and open file descriptors <span class="hljs-keyword">for</span> all processes owned by the apache user.”
</code></pre>
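<p>The essence of what you can expect back is a loop along these lines (my sketch, not Q's actual output):</p>
<pre><code class="lang-bash"># CPU, memory, and open file descriptors per process owned by apache
for pid in $(pgrep -u apache); do
    fds=$(ls /proc/"$pid"/fd 2&gt;/dev/null | wc -l)
    echo "PID $pid cpu/mem: $(ps -o %cpu=,%mem= -p "$pid") fds: $fds"
done
</code></pre>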
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>Amazon Q CLI isn’t about dumbing things down. It’s something worth considering, though, especially when your team is fighting fires, or when you're trying to codify your troubleshooting instincts into something repeatable.</p>
<p>For admins managing fleets of EC2 instances running complex stacks — like a commercial app that spawns Java, Node.js, and background daemons — having Amazon Q CLI right there inside the instance might be the difference between 30 minutes of rabbit-hole debugging and a 5-minute fix.</p>
<p>If you’re someone who SSHs into servers often and spends half that time remembering arcane command syntax, give it a shot. It might give you a second brain at the terminal: one that knows bash, AWS, and maybe even your own internal rules.</p>
<hr />
<p><em>Disclaimer: Please note that AWS is constantly evolving, and new features may be available since the release of this blog post. It's recommended to review the latest documentation to determine the most suitable solutions for your specific needs. This blog is a reference guide only. Ensure that all solutions and tooling — including Amazon Q CLI — comply with your organization's security and compliance policies. Some services may still be evolving and may not yet meet all regulatory or industry-specific standards.</em></p>
]]></content:encoded></item><item><title><![CDATA[Streamlining AWS ROSA OpenShift operator deployment with OpenShift GitOps & Kustomize]]></title><description><![CDATA[Introduction
Deploying and managing applications in Kubernetes environments can be challenging, especially when dealing with complex configurations and multiple environments. Imagine a scenario where an operator needs to be deployed across developmen...]]></description><link>https://blog.ajosephlive.com/streamlining-aws-rosa-openshift-operator-deployment-with-openshift-gitops-kustomize</link><guid isPermaLink="true">https://blog.ajosephlive.com/streamlining-aws-rosa-openshift-operator-deployment-with-openshift-gitops-kustomize</guid><category><![CDATA[AWS]]></category><category><![CDATA[openshift]]></category><category><![CDATA[redhat]]></category><category><![CDATA[ArgoCD]]></category><dc:creator><![CDATA[Ajith Joseph]]></dc:creator><pubDate>Tue, 24 Sep 2024 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/3AGRHo54oHo/upload/f47a78da6336a623f91d3b4ca5fb86ca.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>Deploying and managing applications in Kubernetes environments can be challenging, especially when dealing with complex configurations and multiple environments. Imagine a scenario where an operator needs to be deployed across development, staging, and production environments, each requiring unique configurations and settings. Without a structured approach, handling this for each environment can quickly become error-prone and time-consuming.</p>
<p>In this blog, the focus will be on simplifying and streamlining <strong>AWS ROSA (RedHat OpenShift Service on AWS)</strong> OpenShift Operator deployments using <strong>Kustomize</strong> and <strong>RedHat</strong> <strong>OpenShift GitOps (ArgoCD)</strong>. By implementing GitOps methodologies, architects &amp; engineers can alleviate the burden of managing multiple configurations while ensuring that deployments are consistent and reproducible. With <strong>GitOps</strong>, the Git repository becomes the single source of truth for all configuration changes, making deployments predictable and auditable.</p>
<p>The exploration will cover the benefits of Kustomize, highlighting its seamless integration with Kubernetes and OpenShift, including support for both oc and kubectl commands. Additionally, the advantages of using Argo CD for continuous deployment will be discussed, showcasing how it enhances the deployment process by providing automated synchronization with Git repositories.</p>
<h3 id="heading-why-kustomize">Why Kustomize?</h3>
<p><strong>Kustomize</strong> is a tool that enables configuration management through a "base and overlay" model, allowing you to manage common and environment-specific configurations without duplicating YAML files. A quick sketch of the CLI workflow follows the list below.</p>
<ul>
<li><p><strong>Native Kubernetes Integration</strong>: Kustomize is built into both <code>kubectl</code> and <code>oc</code> (OpenShift CLI), offering a seamless experience.</p>
</li>
<li><p><strong>Base and Overlay Structure</strong>: Define a common configuration (base) and apply environment-specific customizations (overlays) using patches.</p>
</li>
<li><p><strong>Declarative Configurations</strong>: By adhering to declarative principles, Kustomize simplifies version control and collaboration.</p>
</li>
</ul>
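<p>Because Kustomize is built into the CLIs, rendering and applying an overlay requires no extra tooling. A minimal sketch, using the folder layout introduced later in this post:</p>
<pre><code class="lang-bash"># Preview what an overlay renders to, without applying it
oc kustomize overlays/development

# Apply it directly (kubectl supports the same -k flag)
oc apply -k overlays/development
</code></pre>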
<h3 id="heading-benefits-of-gitops-with-openshift-gitops-argocd">Benefits of GitOps with OpenShift GitOps (ArgoCD)</h3>
<p><strong>OpenShift GitOps</strong> (based on <strong>Argo CD</strong>) is a RedHat native continuous deployment tool that synchronizes the cluster's state with the configurations stored in Git. Here’s how it helps:</p>
<ul>
<li><p><strong>Automated Continuous Delivery</strong>: Monitor Git repositories for changes, and apply updates automatically to the OpenShift cluster.</p>
</li>
<li><p><strong>Seamless Kustomize Integration</strong>: Manage complex, environment-specific configurations without duplicating YAML files.</p>
</li>
<li><p><strong>Declarative and Auditable</strong>: Track every change made to the cluster through Git, offering full traceability.</p>
</li>
</ul>
<h3 id="heading-high-level-flow-deploying-an-operator-using-openshift-gitops">High-level flow deploying an Operator using OpenShift GitOps</h3>
<p>This section tees up the steps for deploying an operator that is commonly used in the industry. Dynatrace is a well-known observability platform that enables organizations to monitor the performance of their applications and infrastructure effectively. In developing this blog, the decision to showcase the Dynatrace Operator stemmed from its widespread use and recognition within the industry. The focus here is not to provide a comprehensive guide to fully configuring Dynatrace, which involves additional steps like handling subscription keys, but rather to highlight the operator deployment process. This approach emphasizes the importance of automating the deployment in a consistent manner. For those interested in fully configuring Dynatrace, additional information can be found @ <a target="_blank" href="https://www.redhat.com/en/blog/partner-showcase-openshift-app-observability-with-dynatrace-operator">https://www.redhat.com/en/blog/partner-showcase-openshift-app-observability-with-dynatrace-operator</a>.</p>
<p>Below is the overarching setup process to deploy the Dynatrace Operator using Kustomize and OpenShift GitOps.</p>
<h4 id="heading-step-1-set-up-the-folder-structure">Step 1: Set Up the Folder Structure</h4>
<p>Create a structured folder layout for managing the Dynatrace Operator deployment using Kustomize. This structure allows for clear organization and easy management of base and environment-specific configurations.</p>
<pre><code class="lang-plaintext">├── base
│   ├── kustomization.yaml
│   └── dynatrace-operator-subscription.yaml
├── overlays
│   ├── development
│   │   └── kustomization.yaml
│   ├── staging
│   │   └── kustomization.yaml
│   └── production
│       └── kustomization.yaml
└── argocd
    ├── application.yaml
    └── kustomization.yaml
</code></pre>
<ul>
<li><p><strong>Base Folder</strong>: Contains the base configuration for the Dynatrace Operator subscription. This folder holds the essential deployment YAML that can be reused across different environments.</p>
</li>
<li><p><strong>Overlays Folder</strong>: Each environment (development, staging, production) has its own folder containing its specific Kustomization file. Overlays allow for customization of the base configuration without duplicating it.</p>
</li>
<li><p><strong>ArgoCD Folder</strong>: Contains the Argo CD application manifest and its own Kustomization file that defines how Argo CD will manage the deployment of the Dynatrace Operator using Kustomize.</p>
</li>
</ul>
<h4 id="heading-step-2-define-the-base-configuration">Step 2: Define the Base Configuration</h4>
<p>In the base folder, the configuration files specify the core settings for the Dynatrace Operator. This includes deployment manifests and services that are common across all environments. The <code>PATCH-ME</code> placeholders provide a convenient way to mark values that each overlay will replace with environment-specific settings.</p>
<p>Example: <code>base/dynatrace-operator-subscription.yaml</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">operators.coreos.com/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Subscription</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">operators.coreos.com/dynatrace-operator.openshift-operators:</span> <span class="hljs-string">""</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">dynatrace-operator</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">openshift-operators</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">channel:</span> <span class="hljs-string">PATCH-ME</span>  <span class="hljs-comment"># Placeholder for the channel</span>
  <span class="hljs-attr">installPlanApproval:</span> <span class="hljs-string">PATCH-ME</span>  <span class="hljs-comment"># Placeholder for install plan approval</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">dynatrace-operator</span>
  <span class="hljs-attr">source:</span> <span class="hljs-string">certified-operators</span>
  <span class="hljs-attr">sourceNamespace:</span> <span class="hljs-string">openshift-marketplace</span>
</code></pre>
<h4 id="heading-step-2-create-the-base-kustomization-configuration">Step 2: Create the base Kustomization configuration</h4>
<p>This file specifies that the Dynatrace Operator subscription defined in <code>dynatrace-operator-subscription.yaml</code> is a resource that Kustomize should apply.</p>
<p>Example: <code>base/kustomization.yaml</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">kustomize.config.k8s.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Kustomization</span>

<span class="hljs-attr">resources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">dynatrace-operator-subscription.yaml</span>
</code></pre>
<h4 id="heading-step-4-create-environment-specific-overlays">Step 4: Create Environment-Specific Overlays</h4>
<p>This file references the base configuration and applies the necessary changes specific to the development environment. The inline patch specifies what values will replace the <code>PATCH-ME</code> placeholders in the base YAML.</p>
<p>Example: <code>overlays/development/kustomization.yaml</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">kustomize.config.k8s.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Kustomization</span>

<span class="hljs-attr">resources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">../../base</span>

<span class="hljs-attr">patches:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">target:</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Subscription</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">dynatrace-operator</span>
     <span class="hljs-attr">patch:</span> <span class="hljs-string">|
      - op: replace
        path: /spec/channel
        value: alpha  # Dev-specific channel
      - op: replace
        path: /spec/installPlanApproval
        value: Automatic  # Dev-specific install plan approval</span>
</code></pre>
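<p>Before committing, it is worth rendering the overlay locally to confirm the placeholders are patched as intended:</p>
<pre><code class="lang-bash">oc kustomize overlays/development | grep -E 'channel|installPlanApproval'
# Expected output:
#   channel: alpha
#   installPlanApproval: Automatic
</code></pre>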
<h4 id="heading-step-5-deploy-using-openshift-gitops-argocd">Step 5: Deploy using OpenShift GitOps (ArgoCD)</h4>
<p>Create an Argo CD Application manifest that points to the appropriate overlay for deployment. This configuration informs Argo CD to sync the specified environment's configuration to the OpenShift cluster automatically.</p>
<p>Example: Argo CD Application Manifest</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">dynatrace-operator</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">openshift-gitops</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">'https://github.com/your-repo/dynatrace-operator'</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">HEAD</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">overlays/development</span>
    <span class="hljs-attr">kustomize:</span>
      <span class="hljs-attr">namePrefix:</span> <span class="hljs-string">dev-</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">'https://kubernetes.default.svc'</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">dynatrace-operators</span>
  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">true</span>
</code></pre>
<ul>
<li><p><strong>Source Path</strong>: By setting <code>path: overlays/development</code>, Argo CD knows to apply the development overlay and deploy the development-specific configuration.</p>
</li>
<li><p><strong>Destination Namespace</strong>: The <code>namespace: dynatrace-operators</code> defines where the operator will be deployed. This can be different for each environment.</p>
</li>
<li><p><strong>Automated Sync</strong>: The <code>syncPolicy</code> ensures that Argo CD automatically applies changes when they are pushed and merged to the Git repo, keeping the cluster state in sync with the desired state in Git.</p>
</li>
</ul>
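<p>Registering the application is a one-time step; Argo CD keeps it in sync from then on. Assuming the manifest lives under <code>argocd/</code> as in the folder layout above:</p>
<pre><code class="lang-bash"># Register the application with Argo CD (one-time step)
oc apply -f argocd/application.yaml

# Check sync and health status from the CLI
oc get application dynatrace-operator -n openshift-gitops
</code></pre>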
<h3 id="heading-why-gitops-argo-cd-simplifies-management">Why GitOps (Argo CD) Simplifies Management</h3>
<p>GitOps, particularly via OpenShift GitOps (Argo CD), offers a superior method of managing operator deployments for several reasons:</p>
<ul>
<li><p><strong>Centralized Source of Truth</strong>: The Git repository serves as the single source of truth for all configurations. This ensures consistency across environments and makes it easy to audit changes.</p>
</li>
<li><p><strong>Automated Deployments</strong>: Argo CD automatically syncs any changes in the Git repository to the Kubernetes cluster, ensuring that the cluster state always matches the desired state in Git.</p>
</li>
<li><p><strong>Environment Control</strong>: By using overlays, we can easily customize deployments for different environments (dev, staging, production), while still maintaining a common base configuration.</p>
</li>
<li><p><strong>Operator-Driven Approach</strong>: Since GitOps itself is managed by an operator in ROSA, the operational burden is reduced. However, deploying the GitOps operator to the cluster must initially be done manually or via automation (e.g., Terraform), ensuring the operator is in place to manage future deployments.</p>
</li>
</ul>
<h3 id="heading-enabling-openshift-gitops">Enabling OpenShift GitOps</h3>
<p>To enable OpenShift GitOps (Argo CD) in the AWS ROSA cluster, we need to deploy the OpenShift GitOps Operator. This can be done manually through the OpenShift web console or automated using tools like Terraform or Helm. We recommend installing the OpenShift GitOps operator as soon as the cluster is provisioned, treating it as a day 0 activity. From that point forward, GitOps (ArgoCD) should manage all operators and application deployments.</p>
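<p>For reference, a minimal Subscription for the operator can be applied straight from the CLI; a sketch is below (the <code>latest</code> channel is an assumption; confirm the current channel in OperatorHub before using):</p>
<pre><code class="lang-bash">cat &lt;&lt;'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest                # assumed channel; verify in OperatorHub
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
</code></pre>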
<h3 id="heading-alternatives-to-kustomize-and-gitops">Alternatives to Kustomize and GitOps</h3>
<p>While Kustomize and GitOps provide a streamlined approach to deploying OpenShift Operators, other methods also exist for managing configurations and deployments. Terraform is a powerful tool for provisioning and managing infrastructure, making it an excellent choice for cloud resources. However, when it comes to managing Kubernetes configurations, Terraform might not be the best fit due to its complexity in handling Kubernetes manifests and the potential for configuration drift in dynamic environments.</p>
<p>Another option is Helm, which is a popular package manager for Kubernetes. Helm works well with OpenShift GitOps and can simplify the deployment of complex applications through templating. However, for the purposes of this blog, Kustomize was chosen for its straightforward approach to managing configurations directly within Kubernetes without the need for additional templating.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Using Kustomize and GitOps with the OpenShift GitOps Operator for deploying OpenShift Operators on AWS ROSA provides a scalable, maintainable, and consistent approach to managing Kubernetes configurations. Leveraging the power of Kustomize's native Kubernetes support along with OpenShift GitOps' continuous deployment capabilities facilitates efficient and reliable application deployments. This integration ensures that all changes are tracked in version control, allowing for better collaboration and traceability within teams. By automating deployment processes and managing configurations effectively, DevOps practices become smoother and more robust, leading to improved productivity and operational excellence. With the Red Hat OpenShift GitOps Operator, organizations can confidently streamline their workflows while ensuring consistency across different environments, ultimately enhancing the overall performance and reliability of their Kubernetes applications.</p>
<p><em>The code snippets provided in this blog serve as a framework for understanding the deployment logic. They have been sanitized of real data and should be used as basic guidelines only.</em></p>
]]></content:encoded></item><item><title><![CDATA[AI-Powered Cloud Evolution: Transforming Infrastructure Development]]></title><description><![CDATA[Get ready, cloud architects, because the future is intelligent! You've likely heard the buzz around AI swirling in the tech world, and let me tell you, it's not just a passing trend – it's here to stay, and it's reshaping the very foundations of clou...]]></description><link>https://blog.ajosephlive.com/ai-transforming-infrastructure-development</link><guid isPermaLink="true">https://blog.ajosephlive.com/ai-transforming-infrastructure-development</guid><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Ajith Joseph]]></dc:creator><pubDate>Mon, 02 Oct 2023 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/58AiTToabyE/upload/73ef319e50192444298f797ff89b6115.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Get ready, cloud architects, because the future is intelligent! You've likely heard the buzz around AI swirling in the tech world, and let me tell you, it's not just a passing trend – it's here to stay, and it's reshaping the very foundations of cloud infrastructure. Artificial intelligence (AI) is more than just a buzzword; it's a transformative force that's revolutionizing the way we build, manage, and optimize cloud environments. And leading the charge in this revolution is AWS, a powerhouse of innovation and a pioneer in integrating AI technologies into its cloud services.</p>
<h4 id="heading-ai-in-action-a-sneak-peek-at-whats-already-here">AI in Action: A Sneak Peek at What's Already Here</h4>
<p>Even today, AI is silently working behind the scenes in many of your favorite AWS services. Here are just a few examples:</p>
<ul>
<li><p><strong>Amazon EC2 Auto Scaling:</strong> This service leverages machine learning to automatically adjust the number of EC2 instances you have running based on real-time demand. No more scrambling during peak hours – AI ensures you have the resources you need, when you need them, while optimizing costs.</p>
</li>
<li><p><strong>Amazon DynamoDB:</strong> This NoSQL database service utilizes AI for adaptive capacity management. AI analyzes usage patterns and automatically scales storage and throughput to meet fluctuating workloads. Say goodbye to performance bottlenecks and hello to a responsive database.</p>
</li>
<li><p><strong>Amazon Comprehend:</strong> This natural language processing (NLP) service uses AI to extract insights from your text data. Imagine using Comprehend to automatically tag and categorize customer support tickets, enabling faster resolution times.</p>
</li>
</ul>
<p>These are just a taste of the many ways AI is already embedded within AWS. But the future holds even more exciting possibilities.</p>
<h4 id="heading-the-ai-powered-cloud-architect-a-glimpse-into-tomorrow">The AI-Powered Cloud Architect: A Glimpse into Tomorrow</h4>
<p>As AI continues to evolve, cloud architects can expect to see a significant shift in their approach:</p>
<ul>
<li><p><strong>Proactive Problem Solving:</strong> AI-powered tools will proactively identify potential issues before they become outages. Imagine an AI system that detects anomalies in resource utilization and recommends corrective actions, preventing downtime before it even starts. AWS's upcoming AI Ops services promise to revolutionize incident management by leveraging AI-driven analytics to detect, diagnose, and resolve issues in real-time.</p>
</li>
<li><p><strong>Self-Optimizing Infrastructure:</strong> Cloud infrastructure will become self-aware, automatically scaling and optimizing resources based on real-time demands and workload patterns. This frees you from manual configuration tasks, allowing you to focus on higher-level strategic initiatives. AWS's vision for autonomous cloud infrastructure includes AI-driven resource optimization, dynamic workload management, and predictive capacity planning, ensuring optimal performance and cost-efficiency at all times.</p>
</li>
<li><p><strong>Improved Security Posture:</strong> AI will play a crucial role in safeguarding your cloud environment. AI-powered security solutions can analyze network traffic and user behavior to identify and thwart cyber threats in real-time. AWS's security offerings, such as Amazon GuardDuty and AWS Security Hub, leverage AI and machine learning to provide continuous threat detection, automated remediation, and actionable insights, enhancing the overall security posture of your cloud infrastructure.</p>
</li>
</ul>
<h4 id="heading-why-this-matters-the-power-of-an-intelligent-cloud">Why This Matters: The Power of an Intelligent Cloud</h4>
<p>The adoption of AI in cloud infrastructure isn't just about bells and whistles; it's about unlocking a new level of efficiency, security, and agility. Here's how:</p>
<ul>
<li><p><strong>Reduced Costs:</strong> AI-driven automation helps eliminate wasteful resource allocation, leading to significant cost savings. By rightsizing instances, optimizing storage usage, and minimizing downtime, AI empowers organizations to maximize their cloud investments and achieve greater ROI.</p>
</li>
<li><p><strong>Enhanced Agility:</strong> The ability to automatically scale resources based on demand allows you to respond quickly to changing business needs. With AI-powered elasticity, you can seamlessly accommodate spikes in traffic, launch new services faster, and experiment with innovative ideas without the fear of resource constraints.</p>
</li>
<li><p><strong>Improved Security:</strong> AI-powered threat detection and prevention systems provide an extra layer of protection for your sensitive data. By continuously analyzing vast amounts of telemetry data and identifying suspicious patterns, AI enhances threat visibility, reduces response times, and strengthens overall resilience against cyber threats.</p>
</li>
<li><p><strong>Focus on Innovation:</strong> By automating routine tasks, AI frees up your valuable time to focus on strategic initiatives and innovation. Whether it's developing new features, exploring emerging technologies, or driving digital transformation initiatives, AI empowers cloud architects to become agents of change and innovation within their organizations.</p>
</li>
</ul>
<h4 id="heading-embrace-the-future-becoming-an-ai-ready-cloud-architect">Embrace the Future: Becoming an AI-Ready Cloud Architect</h4>
<p>The future of cloud infrastructure is intelligent, and AWS is at the forefront of this revolution. Here are some steps you can take to become an AI-ready cloud architect:</p>
<ul>
<li><p><strong>Stay Curious:</strong> Keep yourself updated on the latest advancements in AI and how they can be applied to cloud infrastructure. Explore resources like the <a target="_blank" href="https://aws.amazon.com/blogs/aws/category/artificial-intelligence/">AWS AI Blog</a>, the <a target="_blank" href="https://www.youtube.com/user/AmazonWebServices/Cloud">Amazon YouTube Channel</a>, and the <a target="_blank" href="https://aws.amazon.com/blogs/apn/">APN Partner Blog</a> to delve into the latest advancements and discover how experts are leveraging AI to build intelligent cloud solutions.</p>
</li>
<li><p><strong>Embrace Experimentation:</strong> Don't be afraid to experiment with new AI-powered AWS services. Leverage AWS's free tier and sandbox environments to explore AI capabilities, test use cases, and gain hands-on experience without incurring additional costs. There are also free demos and hands-on tutorials available for services such as <a target="_blank" href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a> and <a target="_blank" href="https://aws.amazon.com/sagemaker/">Amazon SageMaker</a>.</p>
</li>
<li><p><strong>Develop New Skills:</strong> Start building your knowledge of AI concepts and machine learning principles. Enroll in AWS training courses, explore online tutorials and resources, and participate in hands-on labs to develop your AI expertise and position yourself as a leader in AI-driven cloud architecture.</p>
</li>
</ul>
<p>The future of cloud infrastructure is intelligent, and with the help of AI, we can build more robust, efficient, and scalable cloud environments. So, buckle up, cloud architects, the future is intelligent, and it's going to be an amazing ride!</p>
]]></content:encoded></item><item><title><![CDATA[First Impression:  Mountpoint - Mounting an Amazon S3 as a local file system]]></title><description><![CDATA[Introduction
Ever since AWS announced the alpha release for Mounpoint back on Mar 14, 2023, I have been eagerly waiting for the integration to be generally available. The idea of seamlessly mounting S3 buckets as if they were local drives holds immen...]]></description><link>https://blog.ajosephlive.com/first-impression-mountpoint-mounting-an-amazon-s3-as-a-local-file-system</link><guid isPermaLink="true">https://blog.ajosephlive.com/first-impression-mountpoint-mounting-an-amazon-s3-as-a-local-file-system</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS s3]]></category><category><![CDATA[aws ec2]]></category><dc:creator><![CDATA[Ajith Joseph]]></dc:creator><pubDate>Fri, 11 Aug 2023 21:43:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/sWOvgOOFk1g/upload/10730a2d65c0c3770a16cc95bd0e52df.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>Ever since AWS <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/03/mountpoint-amazon-s3/">announced</a> the alpha release for Mountpoint back on Mar 14, 2023, I have been eagerly waiting for the integration to be generally available. The idea of seamlessly mounting S3 buckets as if they were local drives holds immense potential for transforming the way we handle data storage and access. And now, with the feature finally becoming <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/08/mountpoint-amazon-s3-generally-available/">generally available</a>, the moment I've been waiting for has arrived. In this article, I'll be sharing my initial thoughts and experience as I dive into testing the Mountpoint tool for Amazon S3.</p>
<p>Mountpoint, an open-source project for Amazon S3, is a testament to the dynamic evolution of cloud storage solutions that we have been witnessing in the past few years. Conceived to address the growing demand for simplified access to data storage, the tool owes much to the <a target="_blank" href="https://github.com/awslabs"><strong>AWSLABS</strong></a> developer community, which has done a tremendous job of making it easy to use and an enterprise-ready client that supports performant access to S3 at scale.</p>
<h3 id="heading-what-is-tool-fit-for"><strong>What is Tool Fit for:</strong></h3>
<ul>
<li><p>An Amazon S3 mounted EC2 instance allows the use of native commands, shell commands &amp; library functions like 'ls', 'cat', 'cp', 'touch', 'grep', 'open' etc. to list, read and interact with files.</p>
</li>
<li><p>It can be a great tool for data lake architectures and use cases, enabling seamless access to read large objects stored in S3 to multiple instances concurrently without the need of downloading them to local storage first.</p>
</li>
<li><p>It could be a great tool to simplify uploading and downloading files from S3, sharing and transferring files across local and cloud storage while taking advantage of S3 scale and durability.</p>
</li>
<li><p>It could facilitate collaborative workflows by providing a unified platform for storing, accessing and interacting with shared objects directly from local environments.</p>
</li>
</ul>
<h3 id="heading-what-the-tool-is-not">What the Tool is Not:</h3>
<ul>
<li><p>While Mountpoint offers seamless access to S3 files, it's not optimized for real-time collaborative editing scenarios. It supports writing only to new files, and those writes must be made sequentially.</p>
</li>
<li><p>While Mountpoint provides a local storage feel, it's not a replacement for traditional local storage solutions. It complements local storage by harnessing the benefits of cloud resources.</p>
</li>
<li><p>Mountpoint does not implement all POSIX file system features. It doesn't support advanced file operations such as locking, file permissions and ownership.</p>
</li>
</ul>
<h3 id="heading-installing-the-tool">Installing the tool</h3>
<p>Installing the Mountpoint for Amazon S3 was simple and straightforward. On my Amazon Linux EC2 instance, I made use of the RPM package and installed it using the 'yum' command.</p>
<pre><code class="lang-bash">$ wget https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.rpm
$ sudo yum install ./mount-s3.rpm
</code></pre>
<p>While the Mountpoint client is designed to automatically pick up the credentials from an IAM role associated with the instance, I am using the AWS credentials from environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY since I am testing on an AWS sandbox playground.</p>
<pre><code class="lang-bash">$ mkdir mountp-test-fs
$ mount-s3 mountp-test-bucket mountp-test-fs
bucket mountp-test-bucket is mounted at mountp-test-fs
</code></pre>
<p>Changing the directory to the newly mounted folder and creating new files:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">cd</span> mountp-test-fs
$ mkdir test-folder
$ ls -l
total 0
drwxr-xr-x. 2 ec2-user ec2-user 0 Aug 11 17:50 test-folder
<span class="hljs-comment"># Create new file</span>
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello Mounted S3 Bucket"</span> &gt; TestFile_Write.txt
$ ls
TestFile_Write.txt
<span class="hljs-comment">#View the file</span>
$ cat TestFile_Write.txt
Hello Mounted S3 Bucket
<span class="hljs-comment">#Find the line number for the word 'S3' in the file using grep </span>
$ grep -n <span class="hljs-string">'S3'</span> TestFile_Write.txt | wc -l
1
<span class="hljs-comment">#Find the line number for the word 'S3' in the file using sed</span>
$ sed -n <span class="hljs-string">'/S3/='</span> TestFile_Write.txt 
1
</code></pre>
<p><strong>S3 bucket</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691780051164/b277d04a-660b-4115-addd-af7a40bdec69.png" alt class="image--center mx-auto" /></p>
<p>As you can see, it's pretty simple to integrate with S3 and use native shell/bash commands to interact with the objects within the bucket. However, as mentioned before, one of the limitations is that we cannot edit or update an existing file using Mountpoint (maybe a feature that will be enabled in the future).</p>
<pre><code class="lang-bash"><span class="hljs-comment">#Error during updates to the file objects </span>
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, I am trying to update the file with new content."</span> &gt; TestFile_Write.txt
-bash: TestFile_Write.txt: Operation not permitted
</code></pre>
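<p>The practical workaround is to write the updated content to a new object through the mount, or to bypass the mount and overwrite the object via the S3 API (the second command assumes the AWS CLI is installed and credentialed):</p>
<pre><code class="lang-bash"># Write the updated content as a new object through the mount...
echo "Hello again, Mounted S3 Bucket" &gt; TestFile_Write_v2.txt

# ...or overwrite the original object directly via the S3 API
echo "Hello again, Mounted S3 Bucket" | aws s3 cp - s3://mountp-test-bucket/TestFile_Write.txt
</code></pre>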
<p><strong>Logging</strong></p>
<p>As an Infrastructure Architect, keeping operations healthy and analyzing errors is a constant part of my job. So after simulating various error scenarios, such as directory deletions and removing files from the S3 bucket, I was able to view the logs via syslog. I found that Mountpoint only logs high-severity events by default; verbose logging can be enabled with the --debug option.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#Sample of error logs from syslog during my testing </span>
$ journalctl -e SYSLOG_IDENTIFIER=mount-s3
Aug 11 19:53:09 mount-s3[58999]: [WARN] lookup{req=1408 ino=1 name=<span class="hljs-string">"test-folder"</span>}: mountpoint_s3::fuse: lookup failed: inode error: file does not exist
Aug 11 19:53:09 mount-s3[58999]: [WARN] lookup{req=1410 ino=1 name=<span class="hljs-string">"test-folder"</span>}: mountpoint_s3::fuse: lookup failed: inode error: file does not exist
Aug 11 19:53:46 mount-s3[58999]: [WARN] lookup{req=1454 ino=1 name=<span class="hljs-string">"test-folder"</span>}: mountpoint_s3::fuse: lookup failed: inode error: file does not exist
Aug 11 19:53:51 mount-s3[58999]: [WARN] lookup{req=1460 ino=1 name=<span class="hljs-string">"TestFile_Write.txt"</span>}: mountpoint_s3::fuse: lookup failed: inode error: file does not exist
</code></pre>
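<p>To turn on verbose logging, unmount and remount with the flag:</p>
<pre><code class="lang-bash"># Unmount, then remount with debug logging enabled
umount mountp-test-fs
mount-s3 --debug mountp-test-bucket mountp-test-fs
</code></pre>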
<h3 id="heading-summary">Summary</h3>
<p>Mountpoint integration simplifies data collaboration, provides a local storage feel, and is easy to set up and use. It will prove to be an amazing tool for seamlessly transferring and managing large datasets from Amazon S3 to a local environment for data analysis, without the need for time-consuming downloads or compromising on storage capacity. However, it has to be noted that it does not offer real-time editing capabilities, and offline access to the data is not supported since an active internet connection is required to reach S3. I also wish it could be integrated with CloudWatch natively so that traffic and errors could be monitored and analyzed easily. Overall, it's a valuable asset for cloud workflows, bridging the gap between cloud and local storage with convenience, flexibility and scalability.</p>
]]></content:encoded></item><item><title><![CDATA[Walk-through: Amazon CloudFront signed URLs & custom domains to securely serve Amazon S3 contents]]></title><description><![CDATA[Contributors:

Ajith Joseph, Manager, Deloitte;

Noel Arzadon, Specialist Master, Deloitte;


Introduction
Most organizations that distribute content over the internet want to restrict access to the documents, data and content in their Amazon S3 buck...]]></description><link>https://blog.ajosephlive.com/walk-through-aws-cloudfront-signed-urls-custom-domains-to-securely-serve-aws-s3-contents</link><guid isPermaLink="true">https://blog.ajosephlive.com/walk-through-aws-cloudfront-signed-urls-custom-domains-to-securely-serve-aws-s3-contents</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[cloudfront]]></category><dc:creator><![CDATA[Ajith Joseph]]></dc:creator><pubDate>Tue, 23 May 2023 17:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/1cqIcrWFQBI/upload/d47850e4116bb9383adaa5c75f4ef5e9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Contributors:</p>
<ul>
<li><p><strong><em>Ajith Joseph, Manager, Deloitte;</em></strong></p>
</li>
<li><p><strong><em>Noel Arzadon, Specialist Master, Deloitte;</em></strong></p>
</li>
</ul>
<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>Most organizations that distribute content over the internet want to restrict access to the documents, data and content in their Amazon S3 buckets. While Amazon S3's REST APIs and the associated IAM roles/policies provide easy access to the files and documents within a bucket, sometimes there is a need to distribute objects from an Amazon S3 bucket without exposing the S3 bucket names that are embedded in the direct access URLs.</p>
<p>For example, <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html">Amazon S3 Pre-signed URLs</a> are great when it comes to distributing content with temporary access to specific documents. One of the most common use cases for <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html">S3 pre-signing</a> is distributing documents like correspondence or decisions to the customers where they will receive an email with a link to specific documents which can be viewed and downloaded via a web browser. However, a drawback of this approach is that the S3 bucket name will be visible to the customer since S3 pre-signing process utilizes the same direct S3 REST APIs to generate temporary access to the objects within the bucket.</p>
<p>The below figure shows an example of an S3 pre-signed URL with temporary access to a document. This method will expose the bucket name where it is stored.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684776896157/cc0d36b0-2f21-46b2-866d-47564d5129ab.png" alt="Figure 1: S3 pre-signing exposes the bucket name in its URL" class="image--center mx-auto" /></p>
<p>This could be a deal breaker for organizations that have stringent security rules. While there are many ways to overcome such situations, utilizing an Amazon CloudFront distribution to distribute S3 content could be a possible solution that can be implemented swiftly while adhering to all security needs.</p>
<p><strong>Amazon CloudFront for signing and custom domain names</strong></p>
<p>Amazon CloudFront, being one of the most popular CDN services, is a great way to distribute content from S3 buckets, and there are additional features one can take advantage of, such as caching, edge processing and geographic restrictions. Restricting users' access to private S3 content by requiring viewers to use CloudFront signed URLs, and ensuring that the bucket can be accessed only via CloudFront Origin Access Control (OAC), can greatly enhance the security posture of any organization.</p>
<p>The below figure Illustrates how OAC and bucket policies can prevent access to S3 buckets using direct URLs. (Ideally, WAF can also be utilized to enhance CloudFront origin security but since this blog is a beginner's guide to CloudFront signed URLs, we won't be deploying WAF).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684854704449/3d07dfc9-c73d-463f-883d-9e99e2f8599a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-objectives">Objectives:</h3>
<p>In this walk-through, you will see examples of using Amazon CloudFront for:</p>
<ul>
<li><p>Signed URLs to restrict access to S3 content and mask S3 bucket name in the URL</p>
</li>
<li><p>Sample Python code to generate signed URLs</p>
</li>
<li><p>Testing of signed URLs to download/view a test document from S3</p>
</li>
</ul>
<h3 id="heading-walk-through-steps"><strong>Walk-through Steps:</strong></h3>
<p><strong>Pre-requisites:</strong></p>
<ul>
<li><p>Understanding of OpenSSL, IaC such as CloudFormation, CloudFront Distribution &amp; DNS</p>
</li>
<li><p>Basic python programming</p>
</li>
</ul>
<p><strong>Step 1:Create the key pair to be associated with the CloudFront signers</strong></p>
<p>The signer will use the private key to sign the URL, and CloudFront utilizes the public key to verify the signature. In this blog (Step 4), we will explain how to use a simple Python module to sign the URLs using private keys and generate signed URLs that are valid for a specified expiry time. First, let's create the key pairs required.</p>
<p>Note: The key pair must be SSH-2 RSA, base64 encoded PEM format and 2048 bit.</p>
<p>Using the OpenSSL, we can generate an RSA key pair of 2048 bits and save it as private_key.pem</p>
<pre><code class="lang-bash">openssl genrsa -out private_key.pem 2048
</code></pre>
<p>Now extract the public key out of the private key using the following command</p>
<pre><code class="lang-bash">openssl rsa -pubout -<span class="hljs-keyword">in</span> private_key.pem -out public_key.pem
</code></pre>
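<p>CloudFront only honors signatures from keys it knows about, so the public key also has to be registered with CloudFront and placed in a key group that the distribution's cache behavior trusts. If you prefer the CLI to the console, the registration looks roughly like this (the file name, key group name and key ID below are illustrative placeholders):</p>
<pre><code class="lang-bash"># Register the public key; the JSON file holds CallerReference,
# Name, and the contents of public_key.pem as EncodedKey
aws cloudfront create-public-key --public-key-config file://public-key-config.json

# Group the key so a distribution can trust it as a signer
aws cloudfront create-key-group --key-group-config \
    'Name=cb-signing-key-group,Items=K2EXAMPLEKEYID'
</code></pre>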
<p><strong>Step 2:Provisioning CloudFront distribution and adjust S3 bucket permissions</strong></p>
<p>Below is a code snippet to create an Origin Access Control via CloudFormation to securely access the S3 content:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">CloudblogCloudFrontOAC:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::CloudFront::OriginAccessControl</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">OriginAccessControlConfig:</span>
        <span class="hljs-attr">Description:</span> <span class="hljs-string">Origin</span> <span class="hljs-string">access</span> <span class="hljs-string">control</span> <span class="hljs-string">for</span> <span class="hljs-string">signed</span> <span class="hljs-string">urls</span> <span class="hljs-string">to</span> <span class="hljs-string">S3</span>
        <span class="hljs-attr">Name:</span> <span class="hljs-string">CBOAC-S3</span>
        <span class="hljs-attr">OriginAccessControlOriginType:</span> <span class="hljs-string">s3</span>
        <span class="hljs-attr">SigningBehavior:</span> <span class="hljs-string">always</span>
        <span class="hljs-attr">SigningProtocol:</span> <span class="hljs-string">sigv4</span>
</code></pre>
<p>After the OAC is created, associate it with a CloudFront distribution. If available, we also highly recommend using a custom domain name and associating it with the distribution. (This blog doesn't have steps on how a domain can be created; I have used a popular domain service provider to create the domain used in this blog.)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">DSCloudFrontDistribution:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::CloudFront::Distribution</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">DistributionConfig:</span>
        <span class="hljs-attr">WebACLId:</span> <span class="hljs-string">"arn:aws:wafv2:us-east-1:XXXXXXXXXXX:global/webacl/CB-WebACL-ForBlog/"</span>
        <span class="hljs-attr">Aliases:</span> 
          <span class="hljs-bullet">-</span> <span class="hljs-string">cftest.cloudblog.ajosephlive.com</span>
        <span class="hljs-attr">ViewerCertificate:</span> 
          <span class="hljs-attr">AcmCertificateArn:</span> <span class="hljs-string">"arn:aws:acm:us-east-1:XXXXXXXXXXXXX:certificate/XXXXXXXXXXXXX"</span>
          <span class="hljs-attr">MinimumProtocolVersion:</span> <span class="hljs-string">TLSv1.2_2021</span>
          <span class="hljs-attr">SslSupportMethod:</span> <span class="hljs-string">sni-only</span>  
        <span class="hljs-attr">Origins:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">Id:</span> <span class="hljs-string">CBS3Origin</span>
            <span class="hljs-attr">DomainName:</span> <span class="hljs-string">"    test-bucket-in-us-east-2-cloudblog.s3.us-east-2.amazonaws.com"</span>
            <span class="hljs-attr">OriginAccessControlId:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">CloudblogCloudFrontOAC</span>
            <span class="hljs-attr">S3OriginConfig:</span>
              <span class="hljs-attr">OriginAccessIdentity:</span> <span class="hljs-string">""</span>

        <span class="hljs-attr">Enabled:</span> <span class="hljs-string">'true'</span>
        <span class="hljs-attr">Comment:</span> <span class="hljs-string">S3</span> <span class="hljs-string">CloudFront</span>
        <span class="hljs-attr">DefaultCacheBehavior:</span>
          <span class="hljs-attr">TargetOriginId:</span> <span class="hljs-string">CBS3Origin</span>
          <span class="hljs-attr">AllowedMethods:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">GET</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">HEAD</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">OPTIONS</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">PATCH</span>
          <span class="hljs-attr">ViewerProtocolPolicy:</span> <span class="hljs-string">redirect-to-https</span>
          <span class="hljs-attr">CachePolicyId:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">CBCloudFrontCachePolicy</span>

        <span class="hljs-attr">IPV6Enabled:</span> <span class="hljs-literal">false</span>
        <span class="hljs-attr">CacheBehaviors:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">AllowedMethods:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-string">GET</span>
              <span class="hljs-bullet">-</span> <span class="hljs-string">HEAD</span>
            <span class="hljs-attr">TargetOriginId:</span> <span class="hljs-string">DSS3Origin</span>
            <span class="hljs-attr">PathPattern:</span> <span class="hljs-string">/media/*</span>
            <span class="hljs-attr">ViewerProtocolPolicy:</span> <span class="hljs-string">redirect-to-https</span>
            <span class="hljs-attr">CachePolicyId:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">CBCloudFrontCachePolicy</span>

        <span class="hljs-attr">CustomErrorResponses:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">ErrorCode:</span> <span class="hljs-string">'404'</span>
            <span class="hljs-attr">ResponsePagePath:</span> <span class="hljs-string">"/error-pages/404.html"</span>
            <span class="hljs-attr">ResponseCode:</span> <span class="hljs-string">'200'</span>
            <span class="hljs-attr">ErrorCachingMinTTL:</span> <span class="hljs-string">'30'</span>
</code></pre>
<p>You should see the CloudFront distribution deployed, similar to the figure below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684867632683/48929b3a-ae81-438e-a3ee-5d45c2691c55.png" alt class="image--center mx-auto" /></p>
<p>Update the bucket policy to allow access only from CloudFront via the OAC. A sample JSON policy is below:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: {
        <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"AllowCloudFrontServicePrincipalReadOnly"</span>,
        <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
        <span class="hljs-attr">"Principal"</span>: {
            <span class="hljs-attr">"Service"</span>: <span class="hljs-string">"cloudfront.amazonaws.com"</span>
        },
        <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:GetObject"</span>,
        <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::test-bucket-in-us-east-2-cloudblog/*"</span>,
        <span class="hljs-attr">"Condition"</span>: {
            <span class="hljs-attr">"StringEquals"</span>: {
                <span class="hljs-attr">"AWS:SourceArn"</span>: <span class="hljs-string">"arn:aws:cloudfront::&lt;AWS account ID&gt;:distribution/&lt;CloudFront distribution ID&gt;"</span>
            }
        }
    }
}
</code></pre>
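<p>If you are applying this policy outside of CloudFormation, a quick boto3 sketch like the one below can attach it to the bucket. The account ID and distribution ID here are placeholders.</p>
<pre><code class="lang-python">import json
import boto3

s3 = boto3.client('s3')

bucket_name = 'test-bucket-in-us-east-2-cloudblog'
# Placeholders: substitute your AWS account ID and CloudFront distribution ID
source_arn = 'arn:aws:cloudfront::111122223333:distribution/E1XXXXXXXXXXXX'

policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket_name}/*",
        "Condition": {"StringEquals": {"AWS:SourceArn": source_arn}}
    }
}

# Note: put_bucket_policy replaces any existing bucket policy, so merge first if needed
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))
</code></pre>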
<p><strong>Step 3: Uploading public key to CloudFront</strong></p>
<p>Now upload the public key that was created in <strong>Step 1</strong> to CloudFront public keys. Also, create a key group for this public key.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684868525315/2d53df8e-08f0-4749-8db6-f1166a63c5b7.png" alt class="image--center mx-auto" /></p>
<p>Associate the distribution with this key group and, most importantly, restrict viewer access so that content can only be requested with signed URLs. This ensures the CloudFront origin is accessible only via signed URLs.</p>
<p>The following update to the CloudFormation template's <strong>TrustedKeyGroups</strong> property should do the trick. Note that TrustedKeyGroups expects a list of key group IDs, not names:</p>
<pre><code class="lang-yaml">        <span class="hljs-attr">CacheBehaviors:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">AllowedMethods:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-string">GET</span>
              <span class="hljs-bullet">-</span> <span class="hljs-string">HEAD</span>
            <span class="hljs-attr">TargetOriginId:</span> <span class="hljs-string">DSS3Origin</span>
            <span class="hljs-attr">TrustedKeyGroups:</span> 
              <span class="hljs-bullet">-</span> <span class="hljs-string">cloudblog-ajoseph-test-pub-key-group</span>
</code></pre>
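<p>Since the template needs the key group's ID rather than its name, a small boto3 lookup like the following can fetch it (assuming the key group name used above):</p>
<pre><code class="lang-python">import boto3

cloudfront = boto3.client('cloudfront')

def key_group_id_by_name(name):
    # list_key_groups is paginated via Marker/NextMarker; one page suffices here
    response = cloudfront.list_key_groups()
    for item in response['KeyGroupList'].get('Items', []):
        if item['KeyGroup']['KeyGroupConfig']['Name'] == name:
            return item['KeyGroup']['Id']
    return None

print(key_group_id_by_name('cloudblog-ajoseph-test-pub-key-group'))
</code></pre>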
<p>Verifying the update via the CloudFront console should look like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684868940218/b21e1cd4-7f02-4e4f-b1f1-36cf7b1af590.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 4: Code for generating signed URLs</strong></p>
<p>To test the CloudFront signing process, we created a Lambda function in Python that uses the private key generated in <strong>Step 1</strong> to produce a signed URL.</p>
<p>Note: Since the code below imports the rsa module, we had to deploy a <a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html">Lambda Layer</a> with the rsa package.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> botocore.signers <span class="hljs-keyword">import</span> CloudFrontSigner
<span class="hljs-keyword">import</span> rsa
<span class="hljs-keyword">import</span> base64
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> datetime, timedelta

ssm_client = boto3.client(<span class="hljs-string">'ssm'</span>)

<span class="hljs-comment">#In this example we are using parameter store to store the key pair values; for better secuirty pls store them in Secrets Manager or similar.</span>
public_key_id = ssm_client.get_parameter(Name=<span class="hljs-string">"/cloudblog/ajoseph/signed-url/public-key-id"</span>,WithDecryption=<span class="hljs-literal">True</span>)
private_key_data_dict = ssm_client.get_parameter(Name=<span class="hljs-string">"/cloudblog/ajoseph/signed-url/private-key"</span>,WithDecryption=<span class="hljs-literal">True</span>)
private_key_data=private_key_data_dict[<span class="hljs-string">'Parameter'</span>][<span class="hljs-string">'Value'</span>]  

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">rsa_signer</span>(<span class="hljs-params">message</span>):</span>
    private_key = private_key_data
    <span class="hljs-keyword">return</span> rsa.sign(
        message,
        rsa.PrivateKey.load_pkcs1(private_key.encode(<span class="hljs-string">'utf8'</span>)),<span class="hljs-string">'SHA-1'</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-keyword">try</span>:
        query_string=event[<span class="hljs-string">'queryStringParameters'</span>][<span class="hljs-string">"filename"</span>]
    <span class="hljs-keyword">except</span>:
        query_string=<span class="hljs-literal">None</span> 
    key_id = public_key_id
    url = <span class="hljs-string">"https://cftest.cloudblog.ajosephlive.com/"</span>+query_string
    cf_signer = CloudFrontSigner(key_id , rsa_signer)
    expire_date = datetime.utcnow() + timedelta(hours=<span class="hljs-number">1</span>) <span class="hljs-comment"># expires in 1 hour</span>
    <span class="hljs-comment"># Signing with a canned policy::</span>
    signed_url = cf_signer.generate_presigned_url(url, date_less_than=expire_date)

    <span class="hljs-keyword">return</span> {
       <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>,
       <span class="hljs-string">'body'</span>: signed_url
    }
</code></pre>
<p><strong>Step 5: Testing the signed URLs</strong></p>
<p>Executing the Python code above produces a CloudFront signed URL that grants temporary access to the file TestFile.pdf stored in test-bucket-in-us-east-2-cloudblog. As the screenshot below shows, the bucket name is no longer visible in the signed URL.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684870856001/fd36279e-ce2d-4796-beec-124f983ab509.png" alt class="image--center mx-auto" /></p>
<p>This signed URL can now be used in any web browser to download/view the file until the expiry time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684871436765/7c88ffa5-e461-4692-8989-c2ffe1a7db14.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>In this brief walk-through, we have learned that requiring CloudFront signed URLs for user access to files in Amazon S3 is a great way to enhance security, mask bucket names, and control access to content stored in AWS. We followed a very simple example and did not attempt to build a production-grade architecture. Combine the basic signed-URL concepts shown here with KMS keys for encryption, AWS WAF for inspection, custom header validation, and Lambda@Edge functions to create a robust architecture for distributing Amazon S3 content.</p>
]]></content:encoded></item></channel></rss>