<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[d.evops.net]]></title><description><![CDATA[d.evops.net]]></description><link>https://d.evops.net/</link><image><url>https://d.evops.net/favicon.png</url><title>d.evops.net</title><link>https://d.evops.net/</link></image><generator>Ghost 3.17</generator><lastBuildDate>Mon, 06 Apr 2026 00:32:31 GMT</lastBuildDate><atom:link href="https://d.evops.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Alpine Linux dockerized application development with Windows 10 + WSLv2 + Docker + CLion]]></title><description><![CDATA[<p>This is a quick post to describe how I use CLion on Ubuntu with WSLv2 to develop applications for Alpine Linux in Docker.</p><p>First, install WSLv2 using the following instructions: <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">https://docs.microsoft.com/en-us/windows/wsl/install-win10</a>.  </p><p>Once you have WSLv2 with Ubuntu set up, you can access Linux from</p>]]></description><link>https://d.evops.net/alpine-linux-dockerized-application-development-with-windows-10-wslv2-docker-clion/</link><guid isPermaLink="false">5ee3a2ccb205760001fc928b</guid><dc:creator><![CDATA[Emilio Sanchez]]></dc:creator><pubDate>Mon, 05 Oct 2020 15:59:31 GMT</pubDate><content:encoded><![CDATA[<p>This is a quick post to describe how I use CLion on Ubuntu with WSLv2 to develop applications for Alpine Linux in Docker.</p><p>First, install WSLv2 using the following instructions: <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">https://docs.microsoft.com/en-us/windows/wsl/install-win10</a>.  
</p><p>Once you have WSLv2 with Ubuntu set up, you can access Linux from <code>cmd.exe</code> using the command <code>wsl</code>.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d.evops.net/content/images/2020/06/image-1.png" class="kg-image"><figcaption>cmd.exe -&gt; wsl -&gt; Inside Linux now!</figcaption></figure><p>The only problem here is... well, <code>cmd.exe</code> is a terrible terminal with limited features.  I come from the Linux side of things, and I LOVE using <code><a href="https://gnometerminator.blogspot.com/p/introduction.html">terminator</a></code>.  Therefore, the first thing I'm going to do is find a way to skip the <code>cmd.exe</code> -&gt; <code>wsl</code> routine and run <code>terminator</code> for my terminal needs.  Spoiler warning: the general strategy for running <code>terminator</code> on Windows is more or less the same strategy used to get any Linux UI app running on Windows.</p><h2 id="step-1-get-a-windows-xserver">Step 1: Get a Windows X server</h2><p>The most widely used Windows X server solution is, I believe, <a href="https://sourceforge.net/projects/vcxsrv/">vcxsrv</a>, and its <a href="https://sourceforge.net/p/vcxsrv/wiki/VcXsrv%20%26%20Win10/">wiki</a> provides tutorials for running Linux apps on Windows.  However, I decided to use a paid option called <a href="https://x410.dev/">X410</a>, which, in my opinion, has better integration with Windows.<br><br>Once it is installed and running, we will need to enable "Allow Public Access".  Depending on your network, this has security implications, so please be aware.  
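</p><p>For context: WSL2 runs behind a virtual network, so Linux GUI apps reach the Windows X server over that network rather than locally.  A minimal sketch of the Linux-side setup (assuming the usual WSL2 arrangement, where the nameserver entry in <code>/etc/resolv.conf</code> is the Windows host IP; your values may differ):</p><pre><code># Inside the WSL2 shell: point X11 clients at the Windows-side X server
export DISPLAY=$(awk '/^nameserver/ {print $2}' /etc/resolv.conf):0.0
echo $DISPLAY</code></pre><p>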
You don't want some randos opening X11 applications remotely on your computer, or worse!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d.evops.net/content/images/2020/07/image-3.png" class="kg-image"><figcaption>x410: Allow Public Access</figcaption></figure><p>As a complement, the X410 website also provides a nice tutorial for using it with WSL2, which takes a similar approach to ours.  Check it out at <a href="https://x410.dev/cookbook/wsl/using-x410-with-wsl2/">https://x410.dev/cookbook/wsl/using-x410-with-wsl2/</a></p><h2 id="step-2-install-terminator">Step 2: Install Terminator</h2><p>Now, to install Terminator, do the <code>cmd.exe</code> -&gt; <code>wsl</code> dance to get into the Linux console and install it: <code>sudo apt update &amp;&amp; sudo apt install terminator</code></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d.evops.net/content/images/2020/07/image.png" class="kg-image"><figcaption>Installing Terminator terminal</figcaption></figure><p>Once it is installed, and with X410 up and running, you are ready to go.<br><br><em>This is part 1; it will be extended soon.</em></p>]]></content:encoded></item><item><title><![CDATA[Dockerized prediction.io]]></title><description><![CDATA[<hr><h2 id="overview">Overview</h2><p><strong>Prediction.IO</strong> is an Open Source Machine Learning Server.<br>During a conversation with a good friend, I was informed that he and his team were having problems setting up the official stack and using the engine for their code.  
He suggested that having a dockerized version of the stack</p>]]></description><link>https://d.evops.net/dockerized-prediction-io/</link><guid isPermaLink="false">5ed808b6b205760001fc926e</guid><category><![CDATA[docker]]></category><category><![CDATA[prediction.io]]></category><dc:creator><![CDATA[Emilio Sanchez]]></dc:creator><pubDate>Wed, 03 Jun 2020 20:32:38 GMT</pubDate><content:encoded><![CDATA[<hr><h2 id="overview">Overview</h2><p><strong>Prediction.IO</strong> is an Open Source Machine Learning Server.<br>During a conversation with a good friend, I was informed that he and his team were having problems setting up the official stack and using the engine for their code.  He suggested that having a dockerized version of the stack would help.</p><p>After thinking about this for a while, I searched GitHub for prior work to see if anyone had already attempted to dockerize the solution.  Indeed, I found an old project at <a href="https://github.com/vovimayhem/docker-prediction-io">https://github.com/vovimayhem/docker-prediction-io</a> by Roberto Quintanilla; however, it had some problems:</p><ol><li>It hadn't been updated in more than a year</li><li>It had internal dependencies that were not included in the project</li><li>It used <strong>PostgreSQL</strong> instead of <strong>Elasticsearch</strong></li><li>Even after recreating its internal dependencies, I ran into some SSL problems, so I couldn't run tests to confirm it was working correctly</li></ol><p>I decided to take up the task of updating this project to ActionML's PredictionIO V0.9.7-Aml release.</p><p>For those wondering about the differences between the standard Prediction.IO and the ActionML version, there's a comparison provided by ActionML on their website <a href="http://actionml.com/docs/pio_by_actionml">here</a>.</p><p>This version has the added benefit of working with <a 
href="https://actionml.com/docs/ur">The Universal Recommender</a>, which I used to test that the stack was working correctly.</p><p>In this post, I'll go into detail on how to set up this solution on a local computer and how to run <a href="https://github.com/actionml/template-scala-parallel-universal-recommendation">The Universal Recommender Template</a> as a test to confirm everything is working as it should.</p><h2 id="setting-up">Setting up</h2><p>First, clone the repo from <a href="https://github.com/krusmir/docker-prediction-io">https://github.com/krusmir/docker-prediction-io</a> and go to the directory you set up for it.</p><p>Afterwards, make sure to run:</p><pre><code>git submodule init &amp;&amp; git submodule update
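# Alternatively (a sketch): when cloning fresh, the clone and submodule
# steps can be combined into a single command:
git clone --recurse-submodules https://github.com/krusmir/docker-prediction-io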
</code></pre><p>This will pull the <strong>Universal Recommender Template</strong>, which will be used for testing later on.</p><p>To build the stack, run:</p><pre><code>docker-compose -p TestEnv build
</code></pre><p>Drink a cup of coffee, juice, or whatever you fancy, since creating and compiling the prediction.io Docker image will take a while.</p><p>While you wait for it to build, you can check the <strong>dockerfile</strong> for <strong>prediction.io</strong>.  You will notice that the image is not optimized (i.e. it does not combine multiple commands per RUN statement or use similar tricks).  This is done on purpose: it is quite frustrating and time-consuming to hit an error while downloading (if your internet connection is as intermittent as mine) and to debug where the build went wrong.  I'd rather have a bigger image where I can backtrack if an error is found than optimize the Docker image size.  Feel free to combine all the statements if you feel optimizing the image size is more important than easy backtracking, and to add custom commands to the <strong>dockerfile</strong> if you deem it necessary.</p><hr><p><em>... Enjoy your beverage now ...</em></p><hr><p>Ok, so if you are here, the build must have finished successfully.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/y9094Yc.png" class="kg-image" alt="Successful Build"></figure><p>If your screen looks different, that's ok.  I had previously built the solution, so yours will look different the first time you build it.</p><p>Before proceeding: a pet peeve of mine is to have the rest of the images ready before starting the stack, so if you are like me, do:</p><pre><code>docker-compose -p TestEnv pull
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/eryh99k.png" class="kg-image" alt="docker-compose -p TestEnv pull"></figure><p>Otherwise, just do:</p><pre><code>docker-compose -p TestEnv up -d
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/8cQqPZH.png" class="kg-image" alt="docker-compose -p TestEnv up -d"></figure><p>To see the logs and confirm the application is working:</p><pre><code>docker-compose -p TestEnv logs -f
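# The combined stream interleaves every service; to focus on one of them,
# filter the output by its name (e.g. the pio service):
docker-compose -p TestEnv logs -f | grep pio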
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/L4pblo4.png" class="kg-image" alt="docker-compose -p TestEnv logs -f"></figure><p>If all seems right, congrats!  You have a Prediction.IO stack running.</p><p>Now, let's run some tests to confirm everything is working as it should.</p><h2 id="testing">Testing</h2><p>Now for the fun part: is the stack <em>really</em> working?</p><p>To test the stack, we'll need to enter the pio container and run some commands.</p><p>First, check the stack using:</p><pre><code>docker-compose -p TestEnv ps
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/PWCWiwE.png" class="kg-image" alt="docker-compose -p TestEnv ps"></figure><p>Enter the pio container, using the name assigned to it by your stack, which in my case is:</p><pre><code>docker exec -it testenv_pio_1 bash
</code></pre><p>Then run <code>pio status</code>; you should see something like the following:</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/KGKeLqD.png" class="kg-image" alt="pio status"></figure><p>Everything looks good so far; now let's run the <strong>Universal Recommender Template</strong> (which we cloned previously using the <strong>git submodule</strong> commands).</p><p>Notice that there is a universal folder in the home directory when you access the pio container:</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/Cv4lRPN.png" class="kg-image" alt="universal dir"></figure><p>The universal directory was mounted on the container and corresponds to the <strong>./docker_volumes/universal</strong> directory in the root of the repository (defined in the docker-compose.yml).  This is the same repository you pulled earlier with the git submodule commands.</p><p>To be able to run the examples, we need to install pip on the pio container.  But since the container runs as a non-root user (i.e. <strong>prediction-io</strong>), we'll need to install pip in userspace.  This will allow us to install virtualenv using pip (in userspace again), and then we will create a Python virtualenv with all the dependencies needed to run the tests.</p><p>Do the following inside the pio container:</p><pre><code>mkdir python
cd python
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py --user
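# Optional: put the user-space bin dir first on PATH so plain `pip` works
# (the steps below spell out ~/.local/bin explicitly, so this is not required):
export PATH="$HOME/.local/bin:$PATH"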
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/1AbT1aE.png" class="kg-image" alt="installing pip user space"></figure><p>Once <strong>pip</strong> is installed in userspace, we can install the rest of the tools we need:</p><pre><code>~/.local/bin/pip install virtualenv --user
~/.local/bin/virtualenv prediction.io
source prediction.io/bin/activate
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/Y4vdBoP.png" class="kg-image" alt="installing needed tools with userspace pip"></figure><p>While inside the Python virtualenv, we can now test using the <strong>Universal Recommender Template</strong>.</p><p>Go to the universal directory:</p><pre><code>cd ~/universal
</code></pre><p>However, <strong>before proceeding</strong>, we need to make one small modification to one file in the universal repo.  <strong>In another terminal</strong>, go to the root of the repo. Let's see the difference between the original file and the file we will replace it with.</p><pre><code>diff -c  docker_volumes/engine.json docker_volumes/universal/examples/handmade-engine.json
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/15j5WBW.png" class="kg-image" alt="diff engine.json"></figure><p>The only difference is the following line:</p><pre><code>"es.nodes": "elasticsearch",
</code></pre><p>We are just specifying the name of the Elasticsearch nodes in the sparkConf.</p><p>Just copy the provided file over the one in the submodule with:</p><pre><code>cp docker_volumes/engine.json docker_volumes/universal/examples/handmade-engine.json
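# Sanity check: re-running the diff should now produce no output
diff docker_volumes/engine.json docker_volumes/universal/examples/handmade-engine.json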
</code></pre><p>And now we can run the tests on the original console (the one with the python env).</p><pre><code>./examples/integration-test
</code></pre><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/2rshHYM.png" class="kg-image" alt="running integration tests"></figure><p><strong>Note:</strong>  The tests are quite taxing on your machine.  Make sure you have a decent system to run them, otherwise they might fail.  If you have any problems running the tests, run the integration-test script line by line, copy-pasting each line into the console.  That will make the test a little less taxing.</p><hr><p>That should be it.  Now you have a running <strong>prediction.io</strong> environment on your local machine.</p><p>Please share, comment, and suggest what you would like to see dockerized, or any DevOps recommendation that I might provide.</p><blockquote><em>note: this article was previously published in 2016 at https://d.evops.pw</em></blockquote>]]></content:encoded></item></channel></rss>