ASP.NET Core, DevOps, DevTools, MVC

MSBuild… what? I just right-click on Publish

What is MSBuild? I had no idea. It has the word “build” in it, so it must build something. Oh, I also remember that in the early stages of .NET Core there was a lot of discussion about project.json and the whole DNX thing… Then they decided to keep this MSBuild thingy. In the end we are all using Visual Studio, and the only thing I needed to know was how to set up a Publish definition, right-click on Publish and voila! My app was ready to ship. Until…

While doing some DevOps work, I wanted to build the app only once but transform the config file multiple times based on the environment. So I started digging into this MSBuild. What is it? Here is Microsoft’s definition: MSBuild is the Microsoft Build Engine, a platform for building applications.
It really does a lot for our apps but all its work is hidden behind the “greatest GUI” of all time: Visual Studio.

I was going into this discovery with the idea that MSBuild would work like some other build processes: run a command with some flags and the build is done. Instead, the story is slightly different. It is a bit like the Angular CLI with schematics: you use schematics to define custom actions or redefine existing ones. In MSBuild you use a build file (in XML format) to define the sequence of actions you need to perform. That is what .csproj files are: just build definitions. When you run MSBuild and point it to a .sln file, it knows how to go through each .csproj and run the build processes as needed. Let’s see how it works and play with it.
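Just to set expectations, invoking it directly looks something like this (MySolution.sln is a placeholder of mine; where MSBuild.exe actually lives is covered next):

MSBuild.exe MySolution.sln /t:Build /p:Configuration=Release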

Where is MSBuild on our machines?

The MSBuild command comes with the Windows installation, here: C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe, based on the .NET Framework. But this version is pretty bare-bones; it lacks a bunch of extensions you might need for application-specific builds (like web.config transformation).

Visual Studio installation brings in a more complete version of it here: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe (actual location may vary based on your VS version and installation settings). This version includes a bunch of extensions saved in these folders:

First a bit of configuration:

Let’s start by making sure MSBuild works. Open PowerShell (or your command line of choice) and just run the MSBuild that comes with Windows with the -version flag:
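Using the framework path from above, that is:

& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" -version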

As you can see all is good, MSBuild responded with its version. Now let’s see the version available with Visual Studio install:
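Same -version flag, just pointing at the Visual Studio copy (adjust the path to your own install):

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" -version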

Much newer version (obviously).

We will be using this version for the exercises below.

Hello World

This example comes from here.

Go to your working directory of choice and create a new file named HelloWorld.build. I use VS Code but you can use your editor of choice. The content of this new file is:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0"  xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <Target Name="HelloWorld">
        <Message Text="Hello"></Message>
        <Message Text="World"></Message>
    </Target>
</Project>

MSBuild has 2 main concepts in executing instructions:

  • Target
  • Task

The first is a set of instructions/commands that completes a larger unit of work. A task is the smallest unit of work, usually just one instruction. All instructions are wrapped in a Project tag. A target is invoked via a flag on the MSBuild invocation, /t:TargetName (see below). In this case we just print “Hello” and “World” on the console. Let’s do it.

In PowerShell run this command:

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" HelloWorld.build /t:HelloWorld

And:

How about flow control and variables?

But of course they are possible. A new variable is just a custom XML tag, and conditions can be built with a Condition attribute on a PropertyGroup tag. Create another build definition file called example2.build with this content:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0"  xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup Condition="'$(Name)' == ''">
        <OutMsg>Please let us know your name!</OutMsg>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Name)' != ''">
        <OutMsg>Welcome $(Name)!</OutMsg>
    </PropertyGroup>
    <Target Name="Condition">
            <Message Text="$(OutMsg)"></Message>
    </Target>
</Project>

Then run it with:

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" example2.build /t:Condition /p:Name=""

And then assign your first name to Name:

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" example2.build /t:Condition /p:Name="Emanuele"

You already know the result:

How about that web.config transformation?

In this case, we will use a task (a small unit of work), and it is a preexisting task that comes with the Visual Studio installation, so we must import it. Here is the build definition:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0"  xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Staging|AnyCPU'">
        <BuildConfig>Staging</BuildConfig>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
        <BuildConfig>Release</BuildConfig>
    </PropertyGroup>
    <UsingTask TaskName="TransformXml" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v15.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
    <Target Name="TransformWebConfig">
        <TransformXml Source="Configuration/Web.config"
                      Transform="Configuration/Web.$(BuildConfig).config"
                      Destination="Web.config"
                      StackTrace="true"/>
    </Target>
</Project>

I called this definition web_configs.build.

We then need the web.config files to be transformed. I created 3 files inside a folder called Configuration:

  • web.config
  • web.Staging.config
  • web.Release.config

Here is the XML in each file. Nothing new, just a regular web.config with environment transformations.

<!-- web.config -->
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <connectionStrings>
        <add name="entities" connectionString="Debug" />
    </connectionStrings>
</configuration>

<!-- web.Staging.config -->
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <connectionStrings>
        <add name="entities" connectionString="Staging" xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
    </connectionStrings>
</configuration>

<!-- web.Release.config -->
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <connectionStrings>
        <add name="entities" connectionString="Release" xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
    </connectionStrings>
</configuration>

Let’s now run this build definition with this command:

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" web_configs.build /t:TransformWebConfig /p:Platform=AnyCPU /p:Configuration=Staging

As expected a new transformed web.config file is created in the working directory:

With this expected content:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <connectionStrings>
        <add name="entities" connectionString="Staging" />
    </connectionStrings>
</configuration>

You can run the Release version of the above command; just change the Configuration property value.
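That would be:

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" web_configs.build /t:TransformWebConfig /p:Platform=AnyCPU /p:Configuration=Release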

Well, I started this little journey a bit worried about the complexity of MSBuild, but once I understood a couple of the basic concepts, it wasn’t that hard to use. Keep in mind that you can do a lot with MSBuild, from copying files around to running node, npm, or other command-line scripts.

I would love to hear some interesting use of MSBuild you have done. Feel free to leave a comment below.

DevQuest

Let’s change the culture and adopt a Code of Conduct

Thank you for a very informative evening with Rob Richardson — who is a great speaker. I learned a lot.  The Los Angeles .NET Developers Group offers access to various concepts and prodigious educational speakers, which is why I have continued to look forward to these evenings for over five years now.

Within the last year, I have noticed that attendance at the meetings has trended down to a stagnant 10-15 attendees out of 2,025 Dotnetters (members of this group).

There are a few adjustments that could increase Dotnetters’ attendance and retention — while cultivating an inclusive environment that embraces quality learning.

IMPLEMENT A CODE OF CONDUCT (samples here, here and here, probably best of all StackOverflow) — A well-written code of conduct clarifies an organization’s mission, values and principles, linking them with standards of professional conduct.  Not much action is needed — link to the code of conduct on the Los Angeles .NET Developer Group meetup page and make sure that all confirmation emails from Meetup regarding group events feature a link to the group’s code of conduct page.  It should spell out expected behavior and unacceptable behavior, e.g. no interfering with a presentation (questions are allowed, answers on behalf of the speaker are not!).

ENCOURAGE OPEN COMMUNICATION:  Give everyone a chance at your meeting/presentation to stand up and introduce themselves and present a question and/or provide feedback at the END of the meeting.  Maybe provide cards or paper so they can remember their question. If participants do not want to wait for a Q&A at the end of the presentation then a skilled moderator might be the alternative to assist with this effort.

“OLD SCHOOL” DEVELOPERS:  It’s very important to have our “old school” developers share their knowledge and also to encourage them to return, but their comments should be held until the end of the discussion.
When attendees share their knowledge during the discussion, it tends to upstage the presenter and the presentation.  Sometimes “old school” developers can have an air of “greater knowledge” that can be intimidating for a new developer or for someone who comes to a group/community for networking and to feel part of a common body of knowledge.  I attend several other groups and definitely do not feel this kind of intimidation there.

PRESENTERS:  I have noticed that when the Los Angeles .NET Developer Group opened up the forum to first-time speakers (“lightning talks”), the questions/comments were more “attacks” than questions. It is already hard to get on stage and speak in front of an audience, but when the audience is so scrutinizing it becomes impossible. Instead of letting the speaker convey their point, attendees focus on criticizing details unimportant to the presentation.  As a first-time presenter, this can be disheartening; it will definitely make one never want to get on stage again and, even worse, not return to the group.

OPEN FORUM TIME:  Many of the members have a lot to say about the profession but don’t have enough time to prepare or don’t have the language skills to speak on stage. So why don’t we have a 15-minute “Rant” or “Roast” or “Exciting News” or “Favorite New Tech” slot, or similar, where any member can just stand up (or even stay seated) and say anything he/she wants about the dev profession, the tech industry, Microsoft, the .NET framework and so on? This might be a time to open discussion and initiate networking relationships between members.

OUTREACH: We need the “new blood” to attend and stay. It is difficult to market due to a bad stigma around Microsoft tech. So joining forces with groups in other technologies that might cross over could help cross-pollinate the groups, like integrating the .NET world with front-end frameworks, and maybe having a joint session with JS.LA or similar.  Since .NET code is now running everywhere, we might also be able to do something with some Linux groups out there.

So here it is… food for thought. Chime in!

Collaboration with Angela On The Move.

Tech

Migrate existing WordPress site to AWS Lightsail

As usual the plan is to save money, isn’t it?  But also to make the site more open to modifications as a good developer would like it to be.  We all know we like to tinker…

At the same time, my wife is starting a personal site/blog, so why not host everything together and save some money?

I did some digging and noticed that AWS is now offering a more streamlined version of its cloud called Lightsail; their offering is really simple to use and has a number of VM images available for you to launch.  For as little as $5 per month you can have your own Ubuntu VM to run and manage.  Great!  Exactly what I wanted!

Before getting into the details of the migration, I must point out that the Ubuntu image available through Lightsail is by Bitnami, a company that packages applications for any platform and makes them available for different cloud providers.  This is important here because they have their own way of configuring the system (Apache, MySQL, etc.), so if you plan to just follow regular Ubuntu/WordPress instructions, you will soon find yourself in trouble. I will list below all the links for the Bitnami docs.

Lastly, this whole process is done from a Windows user’s standpoint. For Linux or Mac users some of these steps might be more obvious.

Ok, let’s get started.  This is a bit of a step-by-step guide, mostly for me, in case I need to redo this process in the future.  So feel free to skip the steps you are already familiar with.

Start with Lightsail

First of all you will need an AWS account; then you can choose the VM that you would like to run. I picked their WordPress version:

Instance selection

At this point you have a VM at your disposal with your public IP address and with a WordPress site up and running. I am not going into how to point your domain to this public IP address (DNS settings); you can find plenty of instructions online.  You can also move your domain to AWS (Route 53) if you would like, and Lightsail has settings to manage DNS through Route 53.

Log into the machine

Now we will need to log into the new Ubuntu machine to work on some of the settings. Lightsail has a simple connect button that opens a terminal in the browser:

Or you can use the Windows Subsystem for Linux and open a bash terminal directly on your Windows machine.  Or you can install PuTTY and use it to connect via SSH. In the end this was my choice, as it abstracts away all the security ceremony to connect with a .pem file.  Here are some instructions on how to use the AWS .pem file to connect to your VM with PuTTY.

Now that you are in, what?  Well, the first thing is to retrieve your WordPress password, so that you can log into your new WordPress site.  As you can see below, in your home folder there is a file called “bitnami_application_password”. Self-explanatory, isn’t it?

Just use the cat command to read the file at the command line:

~$ cat bitnami_application_password

Ok now that you have your password, access the admin dashboard by adding “wp-admin” after your AWS Lightsail public ip address:

http://my-ip-address/wp-admin

Export and Import WordPress

At this point, export all your posts and data from your old site by clicking here:

Save this file on your PC; then, by clicking Import in the same screen above but on your new WordPress instance, you will import all the posts and data into your new WordPress site.  Take note of your theme and all your plugins, as you will have to reinstall them on the new site. Plenty of instructions online.

Install a Second WordPress Site

Now the fun part starts.  Luckily, Bitnami supports installing multiple instances of an application on the same machine.  You can download the module you are interested in here.   Then upload the module installer file to your machine via PuTTY.  Here are the instructions, and here is the key terminal command, to be run from your Windows machine:

pscp c:\bitnami-wordpress-4.9.4-4-module-linux-x64-installer.run bitnami@your-public-ip-address:~/bitnami-wordpress-4.9.4-4-module-linux-x64-installer.run

With this you will drop the file in your server’s home folder.  Now you are ready to install a new WordPress site on your machine; follow these instructions, recapped here:

First, make the installer executable:

$ sudo chmod a+x bitnami-wordpress-VERSION-module-linux-x64-installer.run

Then run it with a flag that names the new WordPress site:

$ sudo ./bitnami-wordpress-VERSION-module-linux-x64-installer.run --wordpress_instance_name NEW_BLOG_NAME

Go through the wizard and answer all the questions:

At the end of this process you will find an additional folder in your apps folder with the new WordPress site. In my case “wordpress” is my site and “angelawp” is my wife’s:

There is also a good readme here that walks through the Bitnami way for the WordPress stack.

Configure Apache

Now it is time to configure Apache to support 2 websites on the same IP address (obviously at this point, you will have already taken care of DNS and pointed both domains to this public IP address).

Apache handles this via its virtual host configuration (here is a good walkthrough), but Bitnami handles Apache configuration in its own way.  It uses a number of configuration files that include each other based on the Bitnami apps structure within the machine. The instructions below can be found here.

The Bitnami WordPress configuration, in the case of multiple WordPress sites on the same machine, is in “prefix” mode, meaning that each site is accessed via http://example.com/wordpress1 and http://example.com/wordpress2.  We need to change this, as we actually have 2 separate domains that must point to the correct WordPress site, and this is done via virtual hosts.

Each Bitnami WordPress site has a prefix configuration file at /opt/bitnami/apps/MYSITE/conf/httpd-prefix.conf and a virtual host file at /opt/bitnami/apps/MYSITE/conf/httpd-vhosts.conf. It turns out both files are then pulled in by way of “Include” from the main Apache config files:

Prefix: /opt/bitnami/apache2/conf/bitnami/bitnami-apps-prefix.conf

VirtualHost: /opt/bitnami/apache2/conf/bitnami/bitnami-apps-vhosts.conf

Here is what we need to modify:

1 – Delete the following line in the /opt/bitnami/apache2/conf/bitnami/bitnami-apps-prefix.conf file:

Include "/opt/bitnami/apps/MYSITE/conf/httpd-prefix.conf"

2 – Add a new line in the /opt/bitnami/apache2/conf/bitnami/bitnami-apps-vhosts.conf file:

Include "/opt/bitnami/apps/MYSITE/conf/httpd-vhosts.conf"

3 – Modify the httpd-vhosts.conf as follows. I like to use nano; it seems more intuitive than Vim. So open the file with this:

$ nano /opt/bitnami/apps/wordpress1/conf/httpd-vhosts.conf

then change it with this code:

<VirtualHost *:80>
  ServerName website1.com
  ServerAlias www.website1.com
  DocumentRoot "/opt/bitnami/apps/mysite1/htdocs"

  Include "/opt/bitnami/apps/mysite1/conf/httpd-app.conf"
</VirtualHost>

Now do the same for website2:

$ nano /opt/bitnami/apps/wordpress2/conf/httpd-vhosts.conf

then change it with this code:

<VirtualHost *:80>
  ServerName website2.com
  ServerAlias www.website2.com
  DocumentRoot "/opt/bitnami/apps/mysite2/htdocs"

  Include "/opt/bitnami/apps/mysite2/conf/httpd-app.conf"
</VirtualHost>

Now you need to restart Apache:

$ sudo /opt/bitnami/ctlscript.sh restart apache

If all is good it will restart and you should be able to access your websites independently: website1.com and website2.com

Issue with Preview Post

What has happened is that when writing a post and trying to preview it…

…I was getting a 500 status code.  After a long search I found this post and solved the problem: I changed AllowOverride from None to All in this file:  /opt/bitnami/apps/wordpress1/conf/httpd-app.conf

<Directory "/opt/bitnami/apps/wordpress/htdocs">
    Options +MultiViews +FollowSymLinks
    AllowOverride All
</Directory>

Add SSL Certificate

Now we need to make our site secure with HTTPS and an SSL certificate. I did some research and decided to stay open source and use Let’s Encrypt.

It turns out that Bitnami already thought about this and has a tutorial on how to implement Let’s Encrypt in their apps. Here is the link.  As I did before I will recap the procedure here:

1 – First you need to install the Lego client on the server.  The Lego client is a Let’s Encrypt client written in Go.  It generates a certificate based on the Let’s Encrypt protocol and lets you renew that certificate when needed (every 90 days, as Let’s Encrypt certs are issued with a 90-day expiration).

Log into the server as the bitnami user and run these commands:

$ cd /tmp
$ curl -s https://api.github.com/repos/xenolf/lego/releases/latest | grep browser_download_url | grep linux_amd64 | cut -d '"' -f 4 | wget -i -
$ tar xf lego_linux_amd64.tar.xz
$ sudo mv lego_linux_amd64 /usr/local/bin/lego

These steps will download, extract and copy the Lego client to a directory in your path.

2 – Let’s generate the certificate. To do so you need to make sure your domain is pointing to the public IP address of this server.

Turn off all Bitnami services:

$ sudo /opt/bitnami/ctlscript.sh stop

Request a new certificate for your domain as shown below. Remember to replace the DOMAIN placeholder with your actual domain name, and the EMAIL-ADDRESS placeholder with your email address.  Additionally, we will be requesting a certificate for DOMAIN and its www.DOMAIN version. This is done by adding the --domains flag as many times as the number of sub-domains you would like to add to the certificate.

$ sudo lego --email="EMAIL-ADDRESS" --domains="DOMAIN" --domains="www.DOMAIN" --path="/etc/lego" run

A set of certificates will now be generated in the /etc/lego/certificates directory. This set includes the server certificate file DOMAIN.crt and the server certificate key file DOMAIN.key.

3 – Configure Apache to use these certificates. We will create links to the certificate and key files in the /opt/bitnami/apps/MYSITE/conf/certs directory and give them root permissions:

$ sudo ln -s /etc/lego/certificates/DOMAIN.key /opt/bitnami/apps/MYSITE/conf/certs/server.key
$ sudo ln -s /etc/lego/certificates/www.DOMAIN.key /opt/bitnami/apps/MYSITE/conf/certs/www.server.key
$ sudo ln -s /etc/lego/certificates/DOMAIN.crt /opt/bitnami/apps/MYSITE/conf/certs/server.crt
$ sudo ln -s /etc/lego/certificates/www.DOMAIN.crt /opt/bitnami/apps/MYSITE/conf/certs/www.server.crt
$ sudo chown root:root /opt/bitnami/apps/MYSITE/conf/certs/server*
$ sudo chmod 600 /opt/bitnami/apps/MYSITE/conf/certs/server*

Obviously we are doing it also for the www.DOMAIN cert and key.

4 – Renew the certificate: Let’s Encrypt certificates are only valid for 90 days. To renew the certificate before it expires, we will write a script to perform the renewal tasks and schedule a cron job to run the script periodically.

Create a script at /etc/lego/renew-certificate.sh with the following content.  Remember to replace the DOMAIN placeholder with your actual domain name, and the EMAIL-ADDRESS placeholder with your email address.

#!/bin/bash

sudo /opt/bitnami/ctlscript.sh stop apache
sudo /usr/local/bin/lego --email="EMAIL-ADDRESS" --domains="DOMAIN" --domains="www.DOMAIN" --path="/etc/lego" renew
sudo /opt/bitnami/ctlscript.sh start apache

Make the script executable:

$ chmod +x /etc/lego/renew-certificate.sh

Execute the following command to open the crontab editor:

$ sudo crontab -e

Add the following lines to the crontab file and save it:

0 0 1 * * /etc/lego/renew-certificate.sh 2> /dev/null

5 – Configure the Apache virtual hosts to work with HTTPS.  Open /opt/bitnami/apps/MYSITE/conf/httpd-vhosts.conf with nano and modify it as follows:

<VirtualHost *:80>
  ServerName DOMAIN.com
  ServerAlias www.DOMAIN.com
  DocumentRoot "/opt/bitnami/apps/MYSITE/htdocs"

  RewriteEngine On
  RewriteCond %{HTTPS} !=on
  RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [R,L]

  Include "/opt/bitnami/apps/MYSITE/conf/httpd-app.conf"
</VirtualHost>

<VirtualHost *:443>
  ServerName DOMAIN.com
  DocumentRoot "/opt/bitnami/apps/MYSITE/htdocs"
  SSLEngine on
  SSLCertificateFile "/opt/bitnami/apps/MYSITE/conf/certs/server.crt"
  SSLCertificateKeyFile "/opt/bitnami/apps/MYSITE/conf/certs/server.key"
  Include "/opt/bitnami/apps/MYSITE/conf/httpd-app.conf"
</VirtualHost>

<VirtualHost *:443>
  ServerName www.DOMAIN.com
  DocumentRoot "/opt/bitnami/apps/MYSITE/htdocs" 
  SSLEngine on 
  SSLCertificateFile "/opt/bitnami/apps/MYSITE/conf/certs/www.server.crt" 
  SSLCertificateKeyFile "/opt/bitnami/apps/MYSITE/conf/certs/www.server.key"

  Include "/opt/bitnami/apps/MYSITE/conf/httpd-app.conf" 
</VirtualHost>

Note that I added a rewrite rule to redirect all HTTP requests to HTTPS. We also have 2 VirtualHosts for port 443: one for the bare DOMAIN with its certificate and another for www.DOMAIN with its certificate.

6 – Configuration is done; let’s restart the Bitnami services:

$ sudo /opt/bitnami/ctlscript.sh start

The above process for the SSL certificate must be repeated for the second site on this same machine (my wife’s site).

One last thing before closing this long post. It might happen that the second site cannot load images. If that is the case, follow these instructions.  We basically need to update the database with the second site’s domain. Run this command:

$ sudo mysql -u root -p -e "USE bitnami_MYSITE2; UPDATE wp_options SET option_value='http://DOMAIN2/' WHERE option_name='siteurl' OR option_name='home';"

MYSITE2 is the name of the second site you installed with the Bitnami procedure at the top of this post, and DOMAIN2 is the domain of your second site (my wife’s, in my case).

And this is all for this migration. I hope you will find this helpful.

ASP.NET Core, MVC

From WCF to Secure ASP.NET Core

Here is the challenge: securing a system of WCF services with modern OAuth and OpenIDConnect.

The entire business logic of this solution is handled and served by WCFs.  Nothing wrong with it, but the security practices were a bit outdated and not up to standard.

Considering that years of features and logic were coded in those WCFs, that the budget was limited, and that the need to upgrade to a secure and modern system quickly became very urgent, a rewrite of the entire solution was definitely not an option.  Therefore, I decided to place all the WCF endpoints behind an ASP.NET Core proxy that leverages the IdentityServer4 framework for authentication.

Another challenge I faced was that the entire solution was in VB and I am mainly a C# developer; additionally, this is a VB shop that was a little reluctant about moving to C#.  But after long pondering, and knowing that in the end VB and C# compile down to the same Microsoft Intermediate Language (MSIL), we decided to code this proxy in C# and eventually start porting some of the WCF functionality into a VB .NET Standard class library, which plays well with C# projects anyway.

After doing some research I found several sources of information that I am sharing below. Definitely the most important and inspiring one was this video and blog post by Shayne Boyer called ASP.NET Core: Getting clean with SOAP. These other sources might be helpful as well:

– Here is a way to secure existing WCFs while keeping the door open for more modern Web API systems. The problem is that it is done with IdentityServer3 (an older version): https://leastprivilege.com/2015/07/02/give-your-wcf-security-architecture-a-makeover-with-identityserver3/ Check also the link to the GitHub samples.

– Here is an article on CORS for WCF:

https://blogs.msdn.microsoft.com/carlosfigueira/2012/05/14/implementing-cors-support-in-wcf/

The solution in question has a few different clients: JavaScript clients (browser dashboard apps), mobile apps and server clients.  IdentityServer 4 has a configuration for each of these scenarios (or flows), and their Quickstart samples give you a very good idea of how to implement them.

The key here is the creation of the WCF client that calls the WCF services from the ASP.NET Core Web API.  Shayne uses “svcutil” (the ServiceModel Metadata Utility Tool) to automatically generate the WCF client code.  Instead, I decided to use a great Visual Studio extension that does just that but makes it easier to instantiate and use a WCF client. The extension is called Microsoft WCF Web Service Reference Provider; once installed, you just go to “Connected Services” in your solution explorer, input the endpoint URL, and the tool creates all the code you need.  Also, just recently this tool has been included in Visual Studio 2017 (version 15.5 and above), so there is no need to install the extension anymore. One thing I must mention about this tool is that it works only with SOAP endpoints; if the WCF has other types of endpoints (like REST) the tool will not be able to read the service metadata and will return an error. You can always add a SOAP endpoint to your WCF (hopefully you have access to it).

The extension creates a client that you can instantiate by passing the “EndpointConfiguration” type and use it to call each method exposed by the WCF (by default async). Here is how:

WCFclient client = new WCFclient(EndpointConfiguration.soap);
var result = await client.MyMethodAsync(args);

Now, one of the things that we need to make sure we do is to close and dispose of the WCF client at the end of each request.  And here the ASP.NET Core dependency injection comes in handy.

The ASP.NET Core DI calls the Dispose() method of an injected service if it implements the IDisposable interface.  Our WCF client, created with either the VS extension or the svcutil.exe utility, does not implement IDisposable, so we will wrap the client in a class and implement the interface ourselves. The WCF client does expose the Close() and Abort() methods needed to dispose of the service client (see below).

Be careful, as the ASP.NET docs note the following:

// container will create the instance(s) of these types and will dispose them
services.AddScoped<Service1>();
services.AddSingleton<Service2>();

// container did not create instance so it will NOT dispose it
services.AddSingleton<Service3>(new Service3());
services.AddSingleton(new Service3());

So if the DI creates the instance it will dispose of it (if it implements IDisposable) but if it doesn’t it will not dispose of it even if it implements IDisposable. So do not pass in an instantiated object as an argument, just let DI do the magic.

In wrapping the WCF client we want to keep the possibility of changing the EndpointConfiguration, as well as other configurations we might want to pass in in the future (like a different URL to point to if necessary).  Thus we inherit from the generated WCF client class (which implements the IWCFClient service interface brought in by the VS extension above) plus IDisposable, and we pass the configuration directly to “base” by way of the constructor:

public class WCFClientBySoap : WCFclient, IDisposable
{
    public WCFClientBySoap() : base(EndpointConfiguration.soap)
    {
    }

    private bool disposedValue = false; 

    protected async virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                try
                {
                    if (State != System.ServiceModel.CommunicationState.Faulted)
                    {
                        await CloseAsync();
                    }
                }
                finally
                {
                    if (State != System.ServiceModel.CommunicationState.Closed)
                    {
                        Abort();
                    }
                }
            }

            disposedValue = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
    }
}

Here is a blog that talks about disposing of a WCF client: WCF Client Closal and Disposal

Now we just need to register this wrapper with the ASP.NET Core DI container.  Because we want to be able to change the wrapper in case of different configurations, we register the wrapper as a concrete class of the WCF interface.  In Startup.cs we add this line in the ConfigureServices method:

services.AddScoped<IWCFClient, WCFClientBySoap>();

Finally we create a controller/action endpoint for each WCF endpoint leveraging the injected client:

[Produces("application/json")]
[Route("mycontroller")]
public class MyControllerController : Controller
{
    private IWCFClient _wcfclient;

    public MyControllerController(IWCFClient wcfclient)
    {
        _wcfclient = wcfclient;

    }

    [HttpGet("getMyData")]
    public async Task<IActionResult> GetMyData()
    {
        try
        {
            var result = await _wcfclient.getMyDataAsync();
            if (result == null)
            {
                return BadRequest("Couldn't get Data");
            }
            return new ObjectResult(result);
         }
         catch (Exception x)
         {
             return BadRequest(x.Message);
         }
     }
}

And this is all. It just works.


CSS

Bootstrap border not lining up

Here is the problem: an input group in Bootstrap with a button has the lower border not lining up:

Bootstrap border not lined up
Group input with button. Border not lining up.

So I started investigating, and several posts pointed to the browser maybe not rendering correctly, like this one.  Luckily, at the very bottom of it there was a hint to the right solution: the number precision of the Sass compiler is what creates different line heights for the input and the button. And that was it.  I am using the Sass version of Bootstrap, but the other obstacle was that I am using Webpack to compile Sass with sass-loader.

There are all sorts of posts on how to change the Sass compiler’s number precision, but very little on how to do so with Webpack. Finally this post suggested adding a query parameter to the sass-loader config, and boom, it worked:

With adjusted number precision the borders do line up

As suggested in this post, a good number precision is 10: 'sass-loader?precision=10'.  Here is how I changed the webpack config file:

module: {
    rules: [
        {
            test: /\.scss$/,
            use: extractCSS.extract({
                use: ['css-loader', 'sass-loader?precision=10'],
                fallback: 'style-loader'
            })
        }
    ]
}



ASP.NET Core

ASP.NET Core 2 is here! Semi painless upgrade.

On August 14, 2017 the ASP.NET team announced the release of ASP.NET Core 2 (along with .NET Standard 2 and .NET Core 2).

As you might have noticed, everybody jumped in and upgraded their projects, or at least their pet/test projects, to v2.0; naturally, I felt the same urge and did so for my current project.  So this post chronicles my fairly painless upgrade, with a little hiccup mainly due to the limited documentation.  Keep in mind that Core 2.0 is still in preview, even though it is a final preview and quite stable.

You can find all sorts of blogs and instructions on how to upgrade, like here or here and here.  Following all this information, I worked my way to 2.0 as follows.

Keep in mind that this is “my” journey and yours might be different. I updated a Web application on a Windows machine with Visual Studio 2017 Community edition.

1. Install .NET Core 2.0 SDK

You can find it here.  The process is very simple just follow the installer instructions.

2. Install Visual Studio 2017 preview version 15.3

You can find it here. Actually, it wasn’t clear to me at first that this was still a preview version, and I spent quite a bit of time trying to update my current version of VS 2017 to 15.3 without success (obviously!).  So, Core 2.0 is available with VS 2017 in preview only.

Go ahead and install it, it can live side-by-side with your stable version of VS with no problems.

This new version of Visual Studio can automatically detect if a new SDK is available (you just installed it in step 1) and will offer you the ability to target the new framework.  You can read more about it here.

3. Install the Microsoft.AspNetCore.All meta package

You can do so by way of Nuget with the dependency management tool offered by VS 2017:


Your project probably has several ASP.NET Core packages, but after installing Microsoft.AspNetCore.All you can remove most of them, as they are already included in the AspNetCore.All meta package.  You can find a list of the included packages here.
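For reference, after that cleanup the package section of my .csproj boiled down to something like this (the version number here is illustrative; it depends on the exact 2.0 release you installed):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
</ItemGroup>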

4. Target .Net Core 2 in all the projects in your solution

This is pretty easy: right-click on the project, select Properties, and in the Application dialog select ASP.NET Core 2 as the target. This is for the web projects; if you have any other data or business projects that support the web app, they should target .NET Standard 2.0:


At this point all is set: rebuild the project and run it… but there is always something not exactly right.  Here is what I found:

Content Error

After the steps taken above the first error was the following:

Duplicate ‘Content’ items were included. The .NET SDK includes ‘Content’ items from your project directory by default. You can either remove these items from your project file, or set the ‘EnableDefaultContentItems’ property to ‘false’ if you want to explicitly include them in your project file. For more information, see https://aka.ms/sdkimplicititems. The duplicate items were: ‘wwwroot\img\anonimus.png’

So I checked the .csproj file and it was listing several files and folders that are part of the content directory (wwwroot):
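Something along these lines (reconstructed; the exact entries in your project will differ, but the png is the one named in the error):

<ItemGroup>
  <Content Include="wwwroot\img\anonimus.png" />
  <!-- ...several more wwwroot files and folders listed explicitly... -->
</ItemGroup>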

I thought that this was odd.  The project was created with VS 2017, which automatically detects files in your project folder, so there is no need to actually list them in the .csproj file.  Anyway, I deleted those entries, and the error above was gone.  Great!  The first one is done.

Identity Error (Cookies)

The next series of errors were all due to the cookies settings I had under Identity.  Here is what the error looked like:

‘Cookies’ does not exist under ‘IdentityOptions’

I spent quite a bit of time figuring this out.  Everything pointed to this article in the docs but, as the title says, it shows how to configure cookies when you do not have Identity set up; I did have Identity set up.  So I went into the docs to see how to set up Identity with cookies, but that is old code that no longer works with Core 2.0.  In fact, the previous article about cookies without Identity explicitly says that cookies have been removed from Identity.

Finally this Stack Overflow answer pointed me to this GitHub announcement from the ASP.NET team, which in turn pointed me in the right direction here.  And finally I found the solution.  You can read the details in the post linked above, but cookies now have their own configuration extension to be placed in the ConfigureServices method of the Startup class:

services.ConfigureApplicationCookie(conf =>
{
  conf.LoginPath = "/Auth/Login";
  conf.Events = new CookieAuthenticationEvents()
  {
    OnRedirectToLogin = async ctx =>
    {
      if (ctx.Request.Path.StartsWithSegments("/api") && ctx.Response.StatusCode == 200)
      {
        ctx.Response.StatusCode = 401;
      }
      else
      {
        ctx.Response.Redirect(ctx.RedirectUri);
      }
      await Task.Yield();
    }
  };
});

For the rest of the Identity configuration (password, etc.) I moved it under Configure<IdentityOptions>:

services.Configure<IdentityOptions>(config =>
{
  config.User.RequireUniqueEmail = true;
  config.Password.RequiredLength = 8;
});

Now all works perfectly as before.

One last piece of the puzzle is UseIdentity() in the Configure() method, which is deprecated, as it was just calling UseCookie 4 times (as noted in the ASP.NET team post). Instead, use UseAuthentication(). Here is my Startup class (I removed unrelated code):

public class Startup
    {
        //Removed for brevity

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddIdentity<CCMUser, IdentityRole>()
            .AddEntityFrameworkStores<CCMContext>()
            .AddDefaultTokenProviders();

            services.AddMvc()
            .AddJsonOptions(opt =>
            {
                opt.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
            });

            services.ConfigureApplicationCookie(conf =>
            {
                conf.LoginPath = "/Auth/Login";
                conf.Events = new CookieAuthenticationEvents()
                {
                    OnRedirectToLogin = async ctx =>
                    {
                        if (ctx.Request.Path.StartsWithSegments("/api") && ctx.Response.StatusCode == 200)
                        {
                            ctx.Response.StatusCode = 401;
                        }
                        else
                        {
                            ctx.Response.Redirect(ctx.RedirectUri);
                        }
                        await Task.Yield();
                    }
                };
            });

            services.Configure<IdentityOptions>(config =>
            {
                config.User.RequireUniqueEmail = true;
                config.Password.RequiredLength = 8;
            });
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, CCMSeedData seeder, IHttpContextAccessor httpContext)
        {
            //Removed for brevity

            app.UseStaticFiles();

            app.UseAuthentication();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");

                routes.MapSpaFallbackRoute(
                    name: "spa-fallback",
                    defaults: new { controller = "Home", action = "Index" });
            });
        }
    }

And this is it. I hope this helps someone else who, like me, had a bit of a hard time fixing the cookie settings issue with Identity.

MVC

Return URL with fragment in ASP.NET Core MVC

I am working on a web project based on ASP.NET Core MVC and Aurelia, and I decided to structure it as part MVC and part SPA (Aurelia). The project also implements a basic authentication system (ASP.NET Core Identity) where a non-authenticated user trying to access a secure page is redirected to the login page, which is standard procedure in these cases.

As you can see the return URL sent in with the query string includes a fragment, used by the Aurelia routing (or any spa framework routing you are using):

http://localhost:14500/Auth/Login?ReturnUrl=%2Fcamp%2F7#sessions

The problem is that the fragment  portion of the URL is never sent to the server and it is therefore ignored. So here is what happens:

The fragment of the URL is not added to the action URL of the form, so when you post the login form the server redirects to a URL without the fragment, and you never get to the correct page handled by the SPA router.

So, I included a little JavaScript function that picks up the return URL from the browser’s location.href and updates the form action with the correct return URL:

$(document).ready(function () {
    var form = $("form")[0];
    var hash = document.location.hash;
    // only rewrite the action when there is a fragment and a form with an action
    if (hash && form && form.action) {
        form.action = document.location.href;
    }
});

Here is what the form action looks like after this code runs:
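For the example URL above, the action now carries the fragment along:

http://localhost:14500/Auth/Login?ReturnUrl=%2Fcamp%2F7#sessions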

Now, when the user logs in and the server redirects to the originally requested URL, the Aurelia router picks it up and renders the correct page.

DevQuest

JavaScript Framework Syndrome

Learn to let go…and avoid the “fatigue”.

The other day I attended the local Elastic User Group meetup, and one of the presentations was about the direction the new Kibana UI Framework is taking.  CJ Cenizal, an excellent presenter, introduced us to the new UI framework on which all Kibana development will be based.  The main point was that the company will now move from AngularJS to React, the reason being the idea of making the Kibana UI fully componentized.  Despite the fact that Angular 2 is the natural progression from AngularJS and is completely component based, they still opted in favor of React.  One other important reason CJ gave us was that the core team of developers was very comfortable with React and somehow less comfortable with Angular 2, which in my opinion made total sense.

As you can imagine, the questions from the audience kept on coming: “…why React when Angular 2 is also component based?”, “…we like React, but we think the ecosystem of Angular 2 is larger/stronger and would work better for Kibana…”, etc.  CJ did a great job answering all those questions and more, but the more he made the point for React, the more the audience questioned the decision.  I was getting irritated about it; I am not a particular fan of React nor Angular 2 per se, but I respected and understood CJ and his team’s decision.  At that point it dawned on me that the guests were not trying to question the decision; they were trying to find within themselves the answer to the ultimate question: “What JavaScript framework should I use for my projects?”.

If you pay attention to the community and read enough blog posts you realize that the “masters” out there always respond to the above questions with something like: “…it depends.  Use the right tool for the right job…”.  But we do not like to hear that as it requires a lot of prep work before actually building something, we want a straight answer that we can apply all the time to everything (or almost everything).  I want an expert that tells me: “Considering all options, pros and cons, just use Angular 2 (or Aurelia or React or Vue or …)”.

I know that the audience was just looking for the final answer; they were just trying to finally breathe and stop evaluating all scenarios; but mostly they just wanted to code and PRODUCE.   They wanted a “resolve”.  Not that I am immune to this JavaScript fatigue, but somehow the above realization immensely helped me calm down and just trust that the framework decision, taken at that moment, is the correct one for the time being.

I hope this “2 minutes with the shrink” will help you find peace in this quest. Please leave a comment and let me know.

Tech

Who invented the First PC?

Olivetti, Programma 101

Since I was a kid I have been fascinated by the origin of things and/or where things come from.  A few months ago I came across a Facebook post, by an Italian friend of mine, with a link to a newspaper article titled: “A 50 anni dalla nascita, Renzi incontra gli inventori della Olivetti Programma 101” [Translation: “Fifty years after its birth, Renzi (the Italian prime minister, Mr. Matteo Renzi) meets the inventors of the Olivetti Programma 101”].
I grew up in an Italian city (Biella) very close to Ivrea, which hosts the Olivetti HQ, so naturally I was curious about this article and its content.  But when I read it I became obsessed with the details of how and why this product came about and eventually disappeared.

What is the “Programma 101”?

So what is this thing? It literally is a programmable calculator, which in modern terms is called a computer.  It is important to remember what a computer really is, as today we naturally think of screens, touch screens, phones, iPads, etc.  But the core is the actual “computer” = programmable calculator.

The definition of the word “computer” given by dictionary.com is:

“a programmable electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations.”

In this case the Programma 101 had a keyboard for input and a paper roll printer for output: we feed her data, she makes the calculations and spits out the results on paper.  So it is a computer, as per the above definition.

Why was it revolutionary?

To give you a little bit of historical context, in the 50s/60s the only computers around were the very, very large mainframes that we saw in the movies.  In the United States, IBM created SAGE (Semi-Automatic Ground Environment), a huge mega computer that was operational between 1958 and 1966.  Look at the numbers!!

sage

On the other side of the ocean, Olivetti, already one of the world leaders in typewriters and calculators, decided to challenge the US/IBM computer hegemony by investing lots of money in new technologies, eventually building the Elea 9000 (Elaboratore Elettronico Aritmetico [Arithmetic Electronic Computer], a name later changed to Elaboratore Elettronico Automatico for marketing reasons), launched in 1957.

In both cases these huge machines were mainly used by the government, the military, universities and very large corporations for their internal calculations (taxes, payroll, financial, automation, etc.).  Regular folks were actually scared of these large machines that could manage such enormous amounts of data; it was at this time that rumors about computers taking over the world started, eventually giving way to movies like “2001: A Space Odyssey”, “War Games”, “Terminator”, etc.

In 1960

In 1960 something important happened: the CEO and president of Olivetti, Mr. Adriano Olivetti, passed away and his son Roberto took over the company.  With the intent of following in his father’s footsteps and keeping the company moving even further into the future, he commissioned a small group of engineers to build a new type of computer, something that had never been built or even imagined before.

The Team

P101-team

Perotto wanted to build a product small enough to sit on the desk of any office, easy enough that a secretary could program it, and powerful enough to compute very quickly.  So they embarked on this journey with some major challenges to overcome at that time.

Memory Miniaturization

The first challenge in building something like this was the memory; the transistor had barely started to enter the market, so you can imagine that miniaturization technology was at a very early stage.   At that time the existing memory was huge (about half a cubic yard for a few bytes of memory) and extremely expensive, hence it would not work for a desktop appliance.

They came up with a type of magnetostrictive delay line memory, which is some sort of steel wire coiled into a circuit. It was pretty much as big as today’s motherboards. It was a refreshable memory, but its access was sequential, like a magnetic tape.

MemoryMiniaturization

The total memory of the Programma 101 was 120 bytes.

DeleyLineMemory

Storage

Once you typed in the program, there was no place to store it so that it could be used again after the machine was turned off.  They thought of using a magnetic strip on each side of a cardstock card to store programs.  This to me was genius!  They actually paved the way for what eventually became the floppy disk.

MagneticCards

If you consider that the internal memory of the Programma 101 was 120 bytes and these cards could store 1, maybe 2, programs, we are talking about 240 bytes max.  Compare that to today’s PC storage space…

Language

They needed a very simple programming language, so they used a combination of letters and operators that could be concatenated into a series of calculations.  It is actually not as simple as it could be, but compared to the languages used for the big mainframes of that time, it was a breeze.  It was similar to Assembly, where you had to control where the data is stored, how to move it, etc.  Here is an example:

ProgrammingLanguage

Design

Design had always been a key aspect of machine creation at Olivetti since the early days. Their products won several design awards throughout Olivetti’s history.

MarioBellini

Once the prototype was ready and functional, Olivetti commissioned Mario Bellini to work on the shape and design of the machine.  Bellini is a renowned architect of international fame; he received the Golden Compass Award 8 times, and 25 of his works are part of the New York MoMA permanent collection.

And he transformed the prototype into a wonderful final product:

PrototypeToFinal

Marketing

The marketing campaign was very futuristic and cutting edge for the time, and looking at some of the brochures used to promote the product I could not help thinking: “Did Apple steal the Programma 101 marketing design concepts?”… Probably not, since the Programma 101 looked much prettier than the Apple II.

Marketing

The Final Product

  • In 1964 Olivetti presented the Programma 101 to the New York world fair and it was an instant success.
  • It was launched in the market (mainly the US) in 1965.
  • It sold about 44,000 units.
  • And was priced at $3,200 (about $24,000 today, based on the US Inflation Calculator)

After the Programma 101 the next serious attempts to produce a personal computer or home computer were in the late 70s with Apple I (1976), early version of the Commodore (1977), and the big revolution of the early 80s with the x86 architecture of the Intel microprocessor and the popularization of the IBM PC and all its clones.  Along came different operating systems such as MS DOS, Windows and the Apple OS.  All this 10-15 years after the Programma 101 was introduced to the markets.  It definitely was a product ahead of its time, Wikipedia considers it the first PC.

A few fun facts

  • Although it didn’t have a CPU, its computational ability and dimensions were so extraordinary that NASA bought 10 units to support the Apollo 11 moon landing mission.
  • The US Air Force used it to compute coordinates for ground directed bombing of B-52 targets during the Vietnam War. (not really a “fun” fact)
  • In 1968 HP developed the HP 9100, which was basically a copy of the Programma 101, to the point that HP had to pay $900,000 in royalties to Olivetti (about $6.2M in today’s terms).


EFCore

EF Core Audit Trail

Entities changes audit

A few months back I was interviewing for a full stack position and the hiring company’s CTO asked me to work on a little exercise: “Build an Audit Trail for Entities Changes”.  The purpose of the exercise was to create an automatic system that would log entity changes in the database. After completing the exercise I thought sharing it could help someone else in need of such a tool, and help me remember it. So here it is, check my GitHub repo.

Using .NET Core and Entity Framework Core 1.0.0 (this was done a few months back with v1.0.0; there are already newer versions now), I created a little console app to seed the DB, randomly change the records and list the log.

The way I solved this problem was inspired by a Julie Lerman Pluralsight course (can’t remember which one) and a nice post by Matthew P Jones; it seems that overriding the SaveChanges method of the DbContext is probably the best way to go.  In Julie’s course she has all her entities inherit from an interface that exposes Id, DateModified, DateCreated and User, and she automatically logs this information within the entity object.  By overriding the SaveChanges method we have access to all the properties that are being affected, along with their original and new values, so we can log each property change accordingly (see the AuditContext class).
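To give an idea of the approach (this is a minimal sketch, not the exact code from the repo; the AuditLog entity and its property names are made up for illustration), the override walks the change tracker for modified entries and records each changed property before delegating to the base SaveChanges:

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class AuditContext : DbContext
{
    public DbSet<AuditLog> AuditLogs { get; set; }

    public override int SaveChanges()
    {
        // Snapshot the modified entries first, skipping the audit rows themselves
        var modifiedEntries = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Modified && !(e.Entity is AuditLog))
            .ToList();

        foreach (var entry in modifiedEntries)
        {
            foreach (var property in entry.Metadata.GetProperties())
            {
                var propertyEntry = entry.Property(property.Name);
                if (!propertyEntry.IsModified) continue;

                // One audit row per changed property, with its old and new values
                AuditLogs.Add(new AuditLog
                {
                    EntityName = entry.Entity.GetType().Name,
                    PropertyName = property.Name,
                    OldValue = propertyEntry.OriginalValue?.ToString(),
                    NewValue = propertyEntry.CurrentValue?.ToString(),
                    DateChanged = DateTime.UtcNow
                });
            }
        }

        return base.SaveChanges();
    }
}

// Hypothetical audit entity used by the sketch above
public class AuditLog
{
    public int Id { get; set; }
    public string EntityName { get; set; }
    public string PropertyName { get; set; }
    public string OldValue { get; set; }
    public string NewValue { get; set; }
    public DateTime DateChanged { get; set; }
}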