Scott Hanselman's Blog

Scott Hanselman's Blog

Description

Scott Hanselman on Programming, User Experience, The Zen of Computers and Life in General

Link: www.hanselman.com/blog/

Episodes

Relationship Hacks: Playing video games and having hobbies while avoiding resentment

Jan 7, 2019

Description:

Super Nintendo Controller from Pexels

I'm going to try to finish my Relationship Hacks book in 2019. I've been sitting on it too long. I'm going to try to use blog posts to spur myself into action.

A number of people asked me what projects, what code, what open source I did over the long holiday. ZERO. I did squat. I played video games, in fact. A bunch of them. I felt a little guilty, then I got over it.

The Fun of Finishing - Exploring old games with Xbox Backwards Compatibility

I'm not a big gamer but I like a good story. I do single player with a plot. I consider a well-written video game to be up there with a good book or a great movie. I like a narrative and a beginning and end. Since it was the holidays, playing games did require some thought.

When you're in a mixed relationship (a geek/techie and a non-techie) you need to be respectful of your partner's expectations. The idea of burning 4-6 hours playing games likely doesn't match up with your partner's idea of a good time. That's where communication comes in. We've found this simple system useful. It's non-gendered and should work for all types of relationships.

My spouse and I sat down at the beginning of our holiday vacation and asked each other "What do you hope to get out of this time?" Setting expectations up front avoids quiet resentment building later. She had a list of to-dos and projects, I wanted to veg.

Sitting around all day (staycation) is valid, as is using the time to take care of business (TCB). We set expectations up front to avoid conflict. We effectively scheduled my veg time so it was planned and accepted and it was *OK and guilt-free*.

We've all seen the trope of the gamer hyper-focused on their video game while the resentful partner looks on. My spouse and I want to avoid that - so we do. If she knows I want to immerse myself in a game, a simple heads up goes a LONG way. We sit together, she reads, I play.

It's important not to sneak these times up on your partner. "I was planning on playing all night" can butt up against "I was hoping we'd spend time together." Boom, conflict and quiet resentment can start. Instead, a modicum of planning. A simple heads-up and some balance helps.

I ended up playing about 2-3 days a week, from around 8-9pm to 2am (so a REAL significant amount of time) while we hung out on the other 4-5 days. My time was after the kids were down. My wife was happy to see me get to play (and finish!) games I'd had for years.

There's also the recognition from my spouse that while she doesn't personally value my gaming time, she values that *I* value it. Avoid belittling or diminishing your partner's hobby. If you do, you'll find yourself pushing (or being pushed) away.

One day perhaps I'll get her hooked on a great game and one day I'll enjoy a Hallmark movie. Or not. ;) But for now, we enjoy knowing and respecting that we each enjoy (and sometimes share) our hobbies. End of thread.

If you enjoy my wife's thinking, check her out on my podcast episode The Return of Mo. My wife and I also did a full podcast with audio over our Cancer Year. Mo and Scott share their thoughts and struggles in this cancer diary they started the day after Mo was diagnosed.

Hope you find this helpful.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Using Visual Studio Code to program Circuit Python with an AdaFruit NeoTrellis M4

Dec 26, 2018

Description:

My son and I were working on an Adafruit NeoTrellis M4 Mainboard over the holidays. This amazing little device puts a NeoPixel grid + an audio board + a USB port along with a 120 MHz Cortex M4 core and a mic amplifier all on one board, and you can program it with CircuitPython. CircuitPython is open source and on GitHub at https://github.com/adafruit/circuitpython. "CircuitPython is an education friendly open source derivative of MicroPython." It works with a bunch of boards including this NeoTrellis and it's just lovely for teaching and learning.

This item is just the mainboard! You'll almost certainly want two Silicone Elastomer 4x4 Pads and an enclosure to go along.

Circuit Python

As with a lot of these small boards, when you plug a NeoTrellis into your machine via USB you'll get a new disk drive that pops up. All you have to do to "deploy" your code is copy it to that drive. Even better, why not just edit the code in place?

There's a great Python editor called Mu that works well with Circuit Python. However, my son and I are more familiar with Visual Studio Code so we wanted to see how it worked with Circuit Python.

We installed the Python extension for VS Code as well as the Arduino extension for VS Code and the Arduino IDE directly from the Windows Store.

Fire up VS Code and File | Open Folder and open the Disk Drive of the NeoTrellis and open (or create) a code.py file. Then from the Command Palette (Ctrl-Shift-P) in VS Code select Arduino > Initialize. If you get an error you may need to set up the path to your Arduino IDE. If you installed it from the Windows Store like we did you may find it in a weird path. We set the arduino.path like this:

"arduino.path": "C:\\Program Files\\WindowsApps\\ArduinoLLC.ArduinoIDE_1.8.19.0_x86__mdqgnx93n4wtt"

The NeoTrellis M4 also shows up as a COM port so you can look at its serial output for debugging purposes as if it were an Arduino (because it is underneath). Then select Arduino > Select a COM Port from the Command Palette and it will create a file called .vscode/arduino.json in your folder that will look like this:

{
"port": "COM3"
}

Trellis M4 is awesome

Now, within Visual Studio Code select Arduino > Open Serial Monitor and all of your print() calls will output to that bottom pane.

Of course, we could use PuTTY to connect to the COM port, but since I'm using this as a learning tool with my 11 year old, I find that a single window showing both the console and the code helps them focus, rather than managing multiple windows.

At this point we have a nice Developer Inner Loop going. That inner loop for us (the developers) is that we can write some code, hit save (Ctrl-S) and get immediate feedback. The application restarts when it detects the code.py file has changed and any debug (print) statements appear in the console immediately.
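To make that loop concrete, here's a hypothetical minimal code.py sketch, assuming the adafruit_trellism4 library that's bundled with the NeoTrellis M4 build of CircuitPython. Save it and the board restarts, lights up while a pad is held, and prints to the Serial Monitor pane:

import time
import adafruit_trellism4  # assumed to ship with the NeoTrellis M4 CircuitPython build

trellis = adafruit_trellism4.TrellisM4Express()

while True:
    pressed = trellis.pressed_keys        # pads currently held down
    if pressed:
        print("pressed:", pressed)        # shows up in the Serial Monitor
        trellis.pixels.fill((0, 32, 64))  # light the whole grid
    else:
        trellis.pixels.fill((0, 0, 0))    # all off
    time.sleep(0.05)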

Visual Studio Code doing some Circuit Python

We are really enjoying this Adafruit NeoTrellis M4 Express kit. Next we're going to make a beat sequencer since the Christmas Soundboard was such a hit with mom!

Sponsor: Me! Hey friends, I've got a podcast I'm very proud of where I interview amazing people every week. Check it out at https://www.hanselminutes.com and please not only subscribe in your favorite podcasting app, but also tell your friends! Tweet about it and review it on iTunes/Google Play. Thanks!


© 2018 Scott Hanselman. All rights reserved.
     

The Fun of Finishing - Exploring old games with Xbox Backwards Compatibility

Dec 21, 2018

Description:

Star Wars: KOTOR

I'm on vacation for the holidays and I'm finally getting some time to play video games. I've got an Xbox One X that is my primary machine, and I also have a Nintendo Switch that is a constant source of joy. I recently also picked up a very used original PS4 just to play Spider-man but expanded to a few other games as well.

One of the reasons I end up using my Xbox more than any of my other consoles is its support for Backwards Compatibility. Backwards Compat is so extraordinary that I did an entire episode of my podcast on the topic with one of the creators.

The general idea is that an Xbox should be able to play Xbox games. Let's take that even further - today's Xbox should be able to play today's Xbox games AND yesterday's...all the way back to the beginning. One more step further, shall we? Today's Xbox should be able to play all Xbox games from every console generation and they'll look better than you imagined them!

The Xbox One X can take 720p games and upscale them to 4k, use higher quality textures, and some games like Final Fantasy XIII have even been fully remastered but you still use the original disc! I would challenge you to play the original Red Dead Redemption on an Xbox One X and not think it was a current generation game. I recently popped in a copy of Splinter Cell: Conviction and it automatically loaded a 5-year-old save game from the cloud and I was on my way. I played Star Wars: KOTOR - an original Xbox game - and it looks amazing.

Red Dead Redemption

A little vacation combined with a lot of backwards compatibility has me actually FINISHING games again. I've picked up a ton of games this week and finally had that joy of finishing them. Each game I started up that had a save game found me picking up 60% to 80% of the way into the game. Maybe I got stuck, perhaps I didn't have enough time back then. Who knows? But now I've finished them. Most of these finishes were just 3 to 5 hours of pushing from my current (old, original) save games.

Crysis 2 - An Xbox 360 game that now works on an Xbox One X. I was halfway through and finished it up in a few days.
Crysis 3 - Of course I had to go to the local retro game trader and pick up a copy for $5 and bang through it. Crysis is a great trilogy.
Dishonored - I found a copy in my garage while cleaning. Turns out I had a save game in the Xbox cloud since 2013. I started right from where I left off. It's so funny to see a December 2018 save game next to a 2013 save game.
Alan Wake - Kind of a Twin Peaks type story, or a Stephen King with a flashlight and a gun. Gorgeous game, and very innovative for the time.
Mirror's Edge - Deceptively simple graphics that look perfect on 4k. This isn't just upsampling, to be clear. It's magic.
Metro 2033 - Deep story and a lot of world building. Oddly I finished Metro: Last Light a few months back but never did the original.
Sunset Overdrive - It's so much better than Jet Set Radio Future. This game has a ton of personality and they recorded ALL the lines twice with a male and female voice. I spoke to the voiceover artist for the female character on Twitter and I really think her performance is extraordinary. I had so much fun with this game that now the 11 year old is starting it up. An under-respected classic.
Gears of War Ultimate - This is actually the complete Gears series. I was over halfway through all of these but never finished. Gears are those games where you play for a while and end up pausing and googling "how many chapters in gears of war." They are long games. I ended up finishing on the easiest difficulty. I want a story and I want some fun but I'm not interested in punishment.
Shadow Complex - Also surprisingly long, I apparently (per my save game) gave up with just an hour to go. I guess I didn't realize how close I was to the end?

I'm having a blast (while the spouse and kids sleep, in some cases) finishing up these games. I realize I'm not actually accomplishing anything but the psychic weight of the unfinished is being lifted in some cases. I don't play a lot of multiplayer games as I enjoy a story. I read a ton of books and watch a lot of movies, so I look for a tale when I'm playing video games. They are interactive books and movies for me with a complete story arc. I love it when the credits roll. A great single player game with a built-up universe is as satisfying (or more so) as finishing a good book.

What are you playing this holiday season? What have you rediscovered due to Backwards Compatibility?

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Enjoy some DOS Games this Christmas with DOSBox

Dec 19, 2018

Description:

I blogged about DOSBox five years ago! Apparently I get nostalgic around this time of year when I've got some downtime. Here's what I had to say:

I was over at my parents' house for the Christmas holiday and my mom pulled out a bunch of old discs and software from 20+ years ago. One game was "Star Trek: Judgment Rites" from 1995. I had the CD-ROM Collector's Edition with all the audio from the original actors, not just the floppy version with subtitles. It's a MASSIVE 23 megabytes of content!

DOSBox has been providing joy in its reliable service for over 16 years and you should go check it out RIGHT NOW, if only to remind yourself of how good we have it now. DOSBox is an x86 and DOS emulator - not a virtual machine. It emulates classic hardware like Sound Blaster cards and older graphics standards like VGA/VESA.

If a game runs too fast, you can slow it down by pressing Ctrl-F11. You can speed up games by pressing Ctrl-F12. DOSBox’s CPU speed is displayed in its title bar. Type "intro special" for a full hotkey list.

Note that DOSBox will start up TINY if you have a 4k monitor. There are a few things you can do about it. First, ALT-ENTER will toggle DOSBox into full screen mode, although when you return to Windows your windows may find themselves resized.

For Windowed mode, I used these settings. You can't scale the window when output=surface, so experiment with settings like these:

windowresolution=1280x1024
output=ddraw

These are only the most basic initial changes you'll want to make. There's an enthusiastic community of DOSBox users that are dedicated to making it as perfect as possible. I enjoy this reddit thread debating "pixel perfect" settings. There's also a number of forks and custom builds of DOSBox out there that impose specific settings so be sure to explore and pick the one that makes you happy. It's also important to understand that aspect ratios and the size and squareness of a pixel will all change how your game looks.

I tend to agree with them that I don't want a blurry scaler. I want the dots/pixels as they are, simply made larger (2x, 3x, 4x, etc) with crisp edges at a reasonable aspect ratio. An interesting change you can make to your .conf file is the "forced" keyword after your scaler choice.
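For reference, here's a hedged sketch of the relevant dosbox.conf sections; the exact values are a matter of taste, and the aspect line is an illustrative addition rather than something prescribed above:

[sdl]
windowresolution=1280x1024
output=ddraw

[render]
# aspect=true keeps the 4:3 shape; "forced" applies the scaler even in text mode
aspect=true
scaler=normal3x forced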

Here is scaler=normal3x (no forced)

Blurry DOSBox

and here's scaler=normal3x forced

The instructions say that forced means "the scaler will be used even if the result might not be desired." In this case, it forces the use of the scaler in text mode. Your mileage may vary, but the point is there's options and it's great fun. You may want scanlines or you may want crisp pixels.

I've found it all depends on what your memory of DOS is; what you're trying to do is change the settings to best recreate that memory. My (broken) memory is of CRISP pixels.

Crisp DOSBox

Amazing difference!
The first thing you should do is add lines like these to the bottom of your dosbox.conf. You'll want your virtual C: drive mounted every time DOSBox starts up!

[autoexec]
# Lines in this section will be run at startup.
MOUNT C C:\Users\scott\Dropbox\DosBox

If you want to play classic games but don't want the hassle (or questionable legality) of other ways, I'd encourage you to spend some serious time at https://www.gog.com. They've packaged up a ton of classic games so they "just work."

Bard's Tale 3
Space Quest 3

Enjoy! And THANK YOU to the folks that work on DOSBox for their hard work. It shows and we appreciate it.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Useful ASP.NET Core 2.2 Features

Dec 14, 2018

Description:

Earlier this week I talked about how I upgraded my podcast site to ASP.NET Core 2.2 and added Health Check features fairly easily. There's a ton of new features and so far it's been great running on my site with no issues. Upgrading from 2.1 is straightforward.

Better integration with popular Open API (Swagger) libraries including design-time checks with code analyzers
Introduction of Endpoint Routing with up to 20% improved routing performance in MVC
Improved URL generation with the LinkGenerator class & support for route Parameter Transformers (and a post from Scott Hanselman)
New Health Checks API for application health monitoring
Up to 400% improved throughput on IIS due to in-process hosting support
Up to 15% improved MVC model validation performance
Problem Details (RFC 7807) support in MVC for detailed API error results
Preview of HTTP/2 server support in ASP.NET Core
Template updates for Bootstrap 4 and Angular 6
Java client for ASP.NET Core SignalR
Up to 60% improved HTTP Client performance on Linux and 20% on Windows

I wanted to look at just a few of these that I found particularly interesting.

You can get a very significant performance boost by moving ASP.NET Core in process with IIS.

Using in-process hosting, an ASP.NET Core app runs in the same process as its IIS worker process. This removes the performance penalty of proxying requests over the loopback adapter when using the out-of-process hosting model.

After the IIS HTTP Server processes the request, the request is pushed into the ASP.NET Core middleware pipeline. The middleware pipeline handles the request and passes it on as an HttpContext instance to the app's logic. The app's response is passed back to IIS, which pushes it back out to the client that initiated the request.
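If you want to opt in explicitly (the new 2.2 templates do this for you), the hosting model is a single property in your project file. A minimal sketch of the csproj:

<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <!-- Run inside the IIS worker process instead of proxying to a separate Kestrel process -->
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
</PropertyGroup>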

HTTP Client performance improvements are quite significant as well.

Some significant performance improvements have been made to SocketsHttpHandler by improving the connection pool locking contention. For applications making many outgoing HTTP requests, such as some Microservices architectures, throughput should be significantly improved. Our internal benchmarks show that under load HttpClient throughput has improved by 60% on Linux and 20% on Windows. At the same time the 90th percentile latency was cut down by two on Linux. See Github #32568 for the actual code change that made this improvement.

HTTP/2 is enabled by default. HTTP/2 may be sneaking up on you as for the most part "it just works." In ASP.NET Core's Kestrel web server, HTTP/2 is enabled by default over HTTPS. You can see here, at both the command line and in Chrome, that I'm using HTTP/2 locally.

HTTP/2 locally

Here's Chrome. Note the "h2."

HTTP/2 in Chrome

Note that you'll only be able to get HTTP/2 when ALPN (Application-Layer Protocol Negotiation) is available. That means ALPN is supported on:

.NET Core on Windows 8.1/Windows Server 2012 R2 or higher
.NET Core on Linux with OpenSSL 1.0.2 or higher (e.g., Ubuntu 16.04)
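If you want to be explicit about protocols (or limit an endpoint to HTTP/1.1), you can configure them per endpoint in Kestrel. Here's a minimal sketch for Program.cs, assuming the default development certificate for HTTPS; the port is just an example:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureKestrel(options =>
        {
            // HTTP/2 needs TLS + ALPN, so pair it with UseHttps()
            options.ListenAnyIP(5001, listenOptions =>
            {
                listenOptions.UseHttps();
                listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
            });
        })
        .UseStartup<Startup>();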

All in all, it's a solid release. Go check out the announcement post on ASP.NET Core 2.2 for even more detail!

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

How to set up ASP.NET Core 2.2 Health Checks with BeatPulse's AspNetCore.Diagnostics.HealthChecks

Dec 12, 2018

Description:

Availability Tests

ASP.NET Core 2.2 is out and upgrading my podcast site was very easy. Once I had it updated I wanted to take advantage of some of the new features.

For example, I have used a number of "health check" services like elmah.io, pingdom.com, or Azure's Availability Tests. I have tests that ping my website from all over the world and alert me if the site is down or unavailable.

I've wanted to make my Health Endpoint Monitoring more formal. You likely have a service that does an occasional GET request to a page and looks at the HTML, or maybe just looks for an HTTP 200 response. For the longest time most site availability tests were just basic pings. Recently folks have been formalizing their health checks.

You can make these tests more robust by actually having the health check endpoint check deeper and then return something meaningful. That could be as simple as "Healthy" or "Unhealthy" or it could be a whole JSON payload that tells you what's working and what's not. It's up to you!


Is your database up? Maybe it's up but in read-only mode? Are your dependent services up? If one is down, can you recover? For example, I use some 3rd party back-end services that might be down. If one is down I could use cached data, but my site is less than "Healthy," and I'd like to know. Is my disk full? Is my CPU hot? You get the idea.

You also need to distinguish between a "liveness" test and a "readiness" test. Liveness failures mean the site is down, dead, and needs fixing. Readiness tests mean it's there but perhaps isn't ready to serve traffic. Waking up, or busy, for example.

If you just want your app to report its liveness, just use the most basic ASP.NET Core 2.2 health check in your Startup.cs. It'll take you minutes to set up.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddHealthChecks(); // Registers health check services
}

public void Configure(IApplicationBuilder app)
{
app.UseHealthChecks("/healthcheck");
}

Now you can add a content check in Azure or Pingdom, or tell Docker or Kubernetes if you're alive or not. Docker has a HEALTHCHECK directive for example:

# Dockerfile
...
HEALTHCHECK CMD curl --fail http://localhost:5000/healthcheck || exit 1

If you're using Kubernetes you could hook up the Healthcheck to a K8s "readinessProbe" to help it make decisions about your app at scale.
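For example, a readiness probe pointed at that endpoint might look something like this in your pod spec (the port and timings here are illustrative, not prescribed):

readinessProbe:
  httpGet:
    path: /healthcheck
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10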

Now, since determining "health" is up to you, you can go as deep as you'd like! The BeatPulse open source project has integrated with the ASP.NET Core Health Check API and set up a repository at https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks that you should absolutely check out!

Using these add on methods you can check the health of everything - SQL Server, PostgreSQL, Redis, ElasticSearch, any URI, and on and on. Just add the package you need and then add the extension you want.

You don't usually want your health checks to be heavy but as I said, you could take the results of the "HealthReport" list and dump it out as JSON. If this is too much code going on (anonymous types, all on one line, etc) then just break it up. Hat tip to Dejan.

app.UseHealthChecks("/hc",
new HealthCheckOptions {
ResponseWriter = async (context, report) =>
{
var result = JsonConvert.SerializeObject(
new {
status = report.Status.ToString(),
errors = report.Entries.Select(e => new { key = e.Key, value = Enum.GetName(typeof(HealthStatus), e.Value.Status) })
});
context.Response.ContentType = MediaTypeNames.Application.Json;
await context.Response.WriteAsync(result);
}
});

At this point my endpoint doesn't just say "Healthy," it looks like this nice JSON response.

{
status: "Healthy",
errors: [ ]
}

I could add a Url check for my back end API. If it's down (or in this case, unauthorized) I'll get a nice explanation. I can decide if this means my site is unhealthy or degraded. I'm also pushing the results into Application Insights which I can then query on and make charts against.

services.AddHealthChecks()
.AddApplicationInsightsPublisher()
.AddUrlGroup(new Uri("https://api.simplecast.com/v1/podcasts.json"),"Simplecast API",HealthStatus.Degraded)
.AddUrlGroup(new Uri("https://rss.simplecast.com/podcasts/4669/rss"), "Simplecast RSS", HealthStatus.Degraded);

Here is the response, cool, eh?

{
status: "Degraded",
errors: [
{
key: "Simplecast API",
value: "Degraded"
},
{
key: "Simplecast RSS",
value: "Healthy"
}
]
}

This JSON is custom, but perhaps I could use the built-in writer for a reasonable default and then hook up a free default UI?

app.UseHealthChecks("/hc", new HealthCheckOptions()
{
Predicate = _ => true,
ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

app.UseHealthChecksUI(setup => { setup.ApiPath = "/hc"; setup.UiPath = "/healthcheckui"; });

Then I can hit /healthcheckui and it'll call the API endpoint and I get a nice little bootstrappy client-side front end for my health check. A mini dashboard if you will. I'll be using Application Insights and the API endpoint but it's nice to know this is also an option!

If I had a database I could check one or more of those for health as well. The possibilities are endless and up to you.

public void ConfigureServices(IServiceCollection services)
{
services.AddHealthChecks()
.AddSqlServer(
connectionString: Configuration["Data:ConnectionStrings:Sql"],
healthQuery: "SELECT 1;",
name: "sql",
failureStatus: HealthStatus.Degraded,
tags: new string[] { "db", "sql", "sqlserver" });
}

It's super flexible. You can even set up ASP.NET Core Health Checks to have a webhook that sends a Slack or Teams message that lets the team know the health of the site.

Check it out. It'll take less than an hour or so to set up the basics of ASP.NET Core 2.2 Health Checks.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

How to remove words from the Windows Autocorrect Spell Check Dictionary

Dec 7, 2018

Description:

Well crap. I was typing really fast and got a squiggly, so I right-clicked on it and rather than selecting the correct word from the autocorrect dictionary, I clicked Add To Dictionary.

I added the MISSPELLED WORD to the Dictionary! Now Windows is suggesting that I spell this word (and others) wrong in all apps.

At this point I also realized that I had no idea how to REMOVE a word from the Windows Spell Check Dictionary. However, I do know that Windows isn't a black box so there must be a dictionary somewhere. It's gotta be a file or a registry key or something, right?

It's even easier than I thought it would be. The Windows 10 custom dictionaries are at %AppData%\Microsoft\Spelling\


I just opened the default.dic file in Notepad and removed the misspelled word.

Opening default.dic in Notepad
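If you'd rather jump straight there from a command prompt, something like this works; note that the language subfolder (en-US here) will match whichever languages you have installed:

notepad "%AppData%\Microsoft\Spelling\en-US\default.dic"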

Whew. I can't tell you how many wrong words have found their way in there over the years. Hope this helps you in some small way.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.
© 2018 Scott Hanselman. All rights reserved.
     

Announcing WPF, WinForms, and WinUI are going Open Source

Dec 4, 2018

Description:

Buckle up friends! Microsoft is open sourcing WPF, Windows Forms (winforms), and WinUI, so the three major Windows UX technologies are going open source! All this is happening on the same day as .NET Core 3.0 Preview 1 is announced. Madness! ;)

.NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). Note that .NET Core 3 continues to be open source and runs on Windows, Linux, Mac, in containers, and in the cloud. In the case of WPF/WinForms/etc you'll be able to create apps for Windows that include (if you like) their own copy of .NET Core for a clean side-by-side install and even faster apps at run time. The Windows UI XAML Library (WinUI) is also being open sourced AND you can use these controls in any Windows UI framework.

That means your (or my!) WPF/WinForms/WinUI apps can all use the same controls if you like, using XAML Islands. I could take the now 10 year old BabySmash WPF app and add support for pens, improved touch, or whatever makes me happy!

WPF and Windows Forms projects are run under the .NET Foundation which also announced changes today and the community will guide foundation operations. The .NET Foundation is also changing its governance model by increasing the number of board members to 7, with just 1 appointed by Microsoft. The other board members will be voted on by the community! Anyone who has contributed to a .NET Foundation project can run, similar to how the Gnome Foundation works! Learn more about the .NET Foundation here.

On the runtime and versioning side, here's a really important point from the .NET blog that's worth emphasizing IMHO:

Know that if you have existing .NET Framework apps that there is not pressure to port them to .NET Core. We will be adding features to .NET Framework 4.8 to support new desktop scenarios. While we do recommend that new desktop apps should consider targeting .NET Core, the .NET Framework will keep the high compatibility bar and will provide support for your apps for a very long time to come.

I think of it this way. If you’ve got an existing app that you’re happy with, there is no reason to port this to .NET Core. Microsoft will support the .NET Framework for a very long time, given that it’s a part of Windows. But post .NET Framework 4.8, new features will usually only become available in .NET Core because Microsoft is drastically reducing the risk and thus rate of change for .NET Framework. So if you’re building a new app or you’re actively evolving an existing app you should really start looking at .NET Core. Porting to .NET Core certainly isn’t free, but it offers many benefits, such as better performance, XCOPY deployment for the framework itself, and a feature set that is growing fast, thanks to open source. Choose the strategy that makes sense for your project and/or business.

I don't want to hear any of this "this is dead, only use that" nonsense. We just open sourced WinForms and have already taken Pull Requests. WinForms has been updated for 4k+ displays! WPF is open source, y'all! Think about the .NET Standard and how you can run standard libraries on .NET Framework, .NET Core, and Mono - or any ".NET" that's out there. Mono is enabling running .NET Standard libraries via WebAssembly. To be clear - your browser is now .NET Standard capable! There are open source projects like https://platform.uno/ and Avalonia and Ooui taking .NET in new and interesting places. Blazor makes Web UIs in .NET with (preview/experimental) client support with Web Assembly and server support included in .NET 3.0 with Razor Components. Only good things are coming, my friends!

.NET ALL THE THINGS

.NET Core runs on Raspberry Pi and ARM processors! .NET Core supports serial ports, IoT devices, and there's even a System.Device.GPIO (General Purpose I/O) package! Go explore https://github.com/dotnet/iot to really get your head around how much cool stuff is happening in the .NET space.

I want to encourage you to go check out Matt Warren's extremely well-researched post "Open Source .NET - 4 years later" to get a real visceral sense of how far we've come as a community. You'll be amazed!

Now, go play!

Download .NET Core 3 Preview 1 on Windows, Mac and Linux.
You can see details of the release in the .NET Core 3 Preview 1 release notes.
Visual Studio 2019 will support building .NET Core 3 applications and the VS2019 preview can be installed side by side with existing versions of VS.

Enjoy.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

On Developer Advocacy

Nov 30, 2018

Description:

Teamwork

Naming things is hard. I've talked before about the term "evangelism" and my dislike for it. Evangelism, Advocacy, Developer Relations, PR, Marketing, and on and on. More and more I'm just trying to educate and maybe entertain a little. So I like Edutainment, myself, hat tip to KRS-One.

I'm getting on a plane tomorrow to go to the Microsoft Azure + AI Conference @DevIntersection and the Free Microsoft Connect 2018 Event (you can watch online all day!) and as I was packing I was struck with a few thoughts I wanted to share here.

What a privilege it is to speak about products that so many people have worked on and (hopefully) so many people will enjoy. Especially ones as large as Azure or Visual Studio - thousands of people work so hard! Engineers, Program Managers, Testers, Community Members...people from all over working on each release so a select few of us get on stage to share it with you! And who am I to have this privilege?

Don't think for a second that when you're giving a technical talk that it's about you. You're sitting on a stack of software you had a small part in writing and standing on the shoulders of giants of generations of engineers and creators who came before you. When I do talks where I'm representing a huge group I reflect on this with gratitude.

If you work on any of the products I'm showing, know this: I may be one of the talking heads or a visible grand marshal but we work for you and we never forget it. My job at events like this is to make the product - your work - shine. I take that job very seriously, and if it looks like it's effortless, that's because of the massive amount of work we put into the presentation. Hours of practice, story arcs, literally blocking movement as if it were a play or stage show, camera work, and transitions. Deeply understanding what we're presenting and why it's awesome and why you're proud of it.

I'm writing this note for all the other advocates and visible community members.

What a joy and privilege it is to stand up and represent our co-workers and fellow engineers and to tell the stories of the things they build!

Let that privilege both put motivation in you and propel you forward to present their work - your teams' work.

I appreciate you all, both inside and out, and I will do my best to represent your team and the larger community to the best of my ability.

Sponsor: Looking for a new challenge? Hired is the leading job marketplace that connects engineers to their next challenge. Let Hired connect you to your next challenge. Sign up now.


© 2018 Scott Hanselman. All rights reserved.
     

The 2018 Christmas List of Best STEM Toys for Kids

Nov 28, 2018

Description:

Hey friends! This is my FIFTH year doing a list of Great STEM Christmas Toys for Kids! Can you believe it? In case you missed them, here's the previous years' lists! Be aware I use Amazon referral links so I get a little kickback (and you support this blog!) when you use these links. I'll be using the pocket money to...wait for it...buy STEM toys for kids! So thanks in advance!

The 2017 Christmas List of Best STEM Toys for kids
The 2016 Christmas List of Best STEM Toys for your little nerds and nerdettes
The 2015 Christmas List of Best STEM Toys
2014: Getting Started with Robots for kids and children in STEM this holiday season

OK, let's do it!

littleBits

I've always liked littleBits but when they first came out I thought they were expensive and didn't include enough stuff. Fast forward and littleBits have dropped in price and built a whole ecosystem of littleBits that work together. This year the most fun is the littleBits Marvel Avengers Inventor Kit. At the time of this writing, this kit is 33% off at Amazon. You can build your own Iron Man (or Ironheart!) gauntlet and load it up with littleBits that can do whatever you'd like. One particularly cool thing included is an LED Matrix that you can address directly by writing code with the iOS or Android app.

littleBits Marvel Avengers Inventor Kit

Kano - Computer Kit and Wand

Both my kids love the Kano Computer Kit, now updated for 2018. It's a complete Raspberry Pi 3 kit that includes the keyboard, mouse, case, LED lights, and everything you'd need to build a Pi. This year they've branched out to the Kano Harry Potter Coding Kit that you can use to build a wand and learn to code. The "wand" is a custom PCB with codeable LEDs, buttons, and batteries that the kids put inside a wand. The wand is Bluetooth and includes lots of tech like an accelerometer, gyroscope, magnetometer, and a vibrating rumble pack. All of this tech is controllable with laptops or smart devices and codeable with JavaScript.

Harry Potter Kano Coding Kit and Wand

UbTech JIMU Robot - Unicornbot Kit

UbTech has a whole series of Technics-style robot kits. There's the usual tanks and cars, but there's also some more creative and "out there" ones like this 400-piece Unicorn Robot. It includes color sensors, servo motors, a DC motor, and a light-up horn. It's also codeable/controllable via an iOS or Android app. Very cool!

I'd really like their Lynx Alexa controllable walking robot but it's way out of my price range. Still fun to check out though!

Unicornbot

Erector by Meccano Kits

We've found these Erector by Meccano Kits to be inexpensive and well-built. The 25-in-1 kit is great and includes a container and over 600 pieces. I like these metal kits because they feel like the ones I had in my childhood. Kids learn how to use motors and pulleys and explore functional motion.

Erector Set

Osmo Genius Kit for iPad

The Osmo Genius is quite clever and based on one deceptively simple idea - what if the iPad camera faced downward and could see the table in front of the child? It comes with a base and a reflector that directs the front-facing camera downwards. Then the educational games are written to see what's happening on the table and provide near-instant feedback. You can start with the base kit and later optionally add kits and games.

Osmo Genius Kit for iPad

Elenco 130-in-1 Electronic Playground and Learning Center

I like classic toys and while toys with bluetooth and fancy features are cool, I want to balance it out with the classics that let you explore the physical world. These also tend to be more affordable as well.

I really like this classic electronic trainer with 130 experiments like an AM broadcast station, Electronic Organ, LED strobe light, Timer, Logic Circuits and much, much more. The 50-in-One version is just $16! Frankly all the Elenco products are fantastic.

image

Piper Computer Kit (2018 Edition)

I had this on the list last year but my kids still love it. We have the 2016 kit and it's been updated for 2018.

The Piper is a little spendy at first glance, but it's EXTREMELY complete and very thoughtfully created. Sure, you can just get a Raspberry Pi and hack on it - but the Piper is not just a Pi. It's a complete kit where your little one builds their own wooden "laptop" box (more of a luggable), and then starting with just a single button, builds up the computer. The Minecraft content isn't just vanilla Microsoft. It's custom episodic content! Custom voice overs, episodes, and challenges.

What's genius about Piper, though, is how the software world interacts with the hardware. For example, at one point you're looking for treasure on a Minecraft beach. The Piper suggests you need a treasure detector, so you learn about wiring and LEDs and wire up a treasure detector LED while it's running. Then you run your Minecraft person around while the LED blinks faster to detect treasure. It's absolute genius. Definitely a favorite in our house for the 8-12 year old set.

Piper Raspberry Pi Kit

I hope you have a great holiday season!

FYI: These Amazon links are referral links. When you use them I get a tiny percentage. It adds up to taco money for me and the kids! I appreciate you - and you appreciate me - when you use these links to buy stuff.

Sponsor: Let top companies apply to you. Create a free profile on Hired and unlock the ability to let companies apply to you, not the other way around. Create a free profile.


© 2018 Scott Hanselman. All rights reserved.
     

Upgrading the DakBoard Family Calendar with Raspberry Pi Zero W and Read Only filesystem

Nov 23, 2018

Description:

Raspberry Pi Zeros are SMALL

Earlier this week I built a Family Calendar using a used flat screen monitor and a Raspberry Pi 3 I had lying around and documented it in my post How to build a Wall Mounted Family Calendar and Dashboard with a Raspberry Pi and cheap monitor.

Eric Brown added two great comments (the comments on my blog are always better than the content!). He said:

You can save power & money by using a Pi Zero W instead. This is likely overkill, but I took the time to get the Pi Zero to mount the SD card read-only and do all the writes to a RAM disk.

Eric said "RPis are surprisingly sensitive to power glitches, and will often corrupt the SD card" and that "after mounting the SD read-only, my DakBoard has been running stably for months; before doing that, it corrupted the SD card within 6 weeks."

While I haven't had any issues with my Raspberry Pis, this seemed like a fun "version 2" of the calendar to make with the kids. Worst case scenario? Now I have LCD family calendars!

You'll recall I commented about how important the Spouse Acceptance Factor is whenever introducing new technology into the house.

It has to just work. If my Spouse doesn't like the idea or find its not reliable, the SAF (Spouse Acceptance Factor) will be low and they'll want to get rid of it. All it takes is one "why isn't this working" and I'm dead in the water.

I checked Amazon and found a number of Raspberry Pi Zero W (W is for Wireless, important!) Kits for around US$20. You can see in the picture above how SMALL a Raspberry Pi Zero W is (with LEGO Miss Marvel for scale).

Get the HDMI cables as flush and sanitary as possible

If you have the cables, power supplies, and don't need the headers and extra stuff, I've seen them as low as $10. It's very important to note that a Raspberry Pi Zero W does support HDMI but it has a MINI-HDMI female connector. You'll need a mini-HDMI to HDMI adapter or a mini-HDMI to HDMI short cable.

Here's another aside. Did you know there are a LOT of different HDMI connector orientations? Sure, you could just loop a big old 6 foot HDMI cable back there, but where's the fun in that? There are micro HDMI D1, D2, and D3 connectors that describe 90 degree and 270 degree rotations of the male end. If you want to be really flush, consider a cable (for example like a C2 to A2) that is usually used in drones. This would allow you to mount the Pi Zero W flush against the back of the monitor - or even better, inside the monitor or a wooden picture frame!

Dakboard

Get the Raspberry Pi Zero W on your wireless and avoid the trouble of keyboards and mice!

Pi Zero Ws are so small that they don't have a regular USB connector. There is one for power and one that is "USB OTG." If you want to connect a mouse and keyboard directly to the Zero you'll need this USB OTG Micro to Type A Cable and/or a powered USB hub.

OR!

Save money and prep your Raspberry Pi Micro SD Card with SSH turned on by default and your Wireless Network enabled by default! Then you can set it up remotely as a DakBoard/MagicMirror Family Calendar.

Download the Image for Raspbian Stretch. You'll want the desktop version (not Lite) because this IS a visual project, not a headless one! I recommend Etcher for burning images to SD Cards. It's free.
Raspberry Pi Zero W and a 1A+ micro USB power supply.
Cheap micro SD Card. They should include an adapter to plug it into your main computer to prepare.
Create an empty file called "ssh" on the prepared Micro SD Card before you put the card in the Raspberry Pi.
Make a file called wpa_supplicant.conf with Linux line feeds (LF, not the default Windows CR/LF) with content like this (and your own country code):

country=US
update_config=1
ctrl_interface=/var/run/wpa_supplicant

network={
scan_ssid=1
ssid="YourNetworkSSID"
psk="NETWORKPASSWORD!"
}

This will cause the Pi to get on the network on boot up which should allow you to SSH over to it directly, thereby avoiding any trouble with keyboards and mice and the Pi Zero W.
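Assuming the default Raspbian user and hostname (and that your machine can resolve .local names via mDNS), that first connection looks something like:

ssh pi@raspberrypi.local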

If you DO end up wanting to connect a keyboard and mouse, you'll want a keyboard/mouse setup that is all in one with just one USB adapter or you'll need a powered USB hub. This should be temporary as you get the Pi prepared.

Make the Raspberry Pi Zero W readonly - after it's been configured with DakBoard

Once I had the Pi Zero W all prepared I went around the net looking for tutorials to make it readonly. You're basically causing Linux to mount the SD Card readonly and then do all writes to a RAM Disk that will ultimately be tossed whenever you (rarely) reboot. Get it perfect before you go readonly as it's a small hassle to switch back. Or you can pull the card out and mount it on your other computer then return it. Still, not awesome.

Eric from the comments pointed me to a Raspberry Pi Jessie tutorial, but I tried it and it didn't work for me, likely because I'm on Raspbian Stretch, a newer version. There's a LOT of choices and ways to do this, but the best tutorial I found was on the page for Domoticz, an open source Home Automation system which looks, as an aside, awesome and something I need to check out in the future!

For now, I followed these instructions on Setting up overlayFS on Raspberry Pi. The "overlay" is the file system you'll write to, but it's a fake: the writes actually go to one folder, and the two folders (one read-write and one read-only) are overlaid over each other. This allowed me to make a Raspberry Pi Raspbian Stretch system read-only on my Pi Zero W.

I followed the instructions exactly, only skipping the parts like "Modify domoticz service" that didn't apply. When I run "mount" I can see the main file system is read-only and the others are overlaid and read-write.

pi@dakboard2:~ $ mount
/dev/mmcblk0p7 on / type ext4 (ro,noatime,data=ordered)
snip!
ramdisk on /var_rw type tmpfs (rw,relatime)
ramdisk on /home_rw type tmpfs (rw,relatime)
overlay on /home type overlay (rw,relatime,lowerdir=/home_org,upperdir=/home_rw/upper,workdir=/home_rw/work)
overlay on /var type overlay (rw,relatime,lowerdir=/var_org,upperdir=/var_rw/upper,workdir=/var_rw/work)

So far so good! This will make a smaller and lower power Family Calendar that will hopefully be more reliable as well! Thanks Eric from the comments!

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.
© 2018 Scott Hanselman. All rights reserved.
     

How to build a Wall Mounted Family Calendar and Dashboard with a Raspberry Pi and cheap monitor

Nov 21, 2018

Description:

Glanceable Dashboard

I love dashboards. I love Raspberry Pis (tiny $35 computers the size of a set of playing cards). And I'm cheap. Ahem, frugal. I found an old 24" LCD at Goodwill (a local thrift shop) and bought it, but it's been sitting unused in my garage.

Then I stumbled on DakBoard. The idea is simple - A wifi connected wall display for your photos, calendar, news, weather and to-do.

The implementation is simple genius. It's a browser that starts up full screen (kiosk mode) and just sits there and updates occasionally. DakBoard provides the private webpage and tools to make that happen. You can certainly build this yourself with any number of open source tools. I chose DakBoard because it was simple, beautiful, and I was able to get the whole thing done in less than an hour. I'm sure I'll spend many hours tweaking it, though. There's also the very popular MagicMirror platform, so lots of choice and power in this space!

What are some considerations?

You may want to turn it off on a schedule to save power and the screen:
cronjob - turn it off on a schedule
sensor - turn it on when something (your alarm, Nest thermostat, motion detector attached to GPIO, etc.) detects your presence
It has to act like an appliance. If you are messing with it to keep it alive, it's not an appliance, it's another computer to manage.
It has to just work. If my Spouse doesn't like the idea or finds it's not reliable, the SAF (Spouse Acceptance Factor) will be low and they'll want to get rid of it. All it takes is one "why isn't this working" and I'm dead in the water.
Finally - what do you want to show?

Someone asked me - "What would I want to put on my dashboard other than a calendar? I don't see why this is useful."

What would you put on a Glance-eable Display?

Family Calendar(s), movie times, temperature, news, my blood sugar, disk free on my NAS, TV schedule, family photos, commute traffic, album releases, homework due soon, family events, trips, flight status, music playing now, literally anything you want as a glance-able display. 

Glanceable Dashboard

Philosophy

You'll want to ask yourself, is this just an iPad on the wall? I'd propose not. In fact, I'd say this is a Wall Mounted Glanceable Display - a personal dashboard - not an interactive thing. I want the family and kids to just stop by, note important information and move on.

It's also worth pointing out that a horizontal monitor on the wall looks like, well, a monitor on the wall. But somehow when it's portrait it's dramatic. It's not something we are (yet) used to seeing. I may try this out in a few ways, or even make a few of these displays!

How to Build a Raspberry Pi-based Family Calendar

It's pretty easy! I used the DakBoard Blog but I had most of the stuff already.

Get a $35 Raspberry Pi 3. The 3 is fast and includes Wifi so you don't need an extra adapter. I like a 2.5A power supply but some folks say you can run the Raspberry Pi off the monitor's USB power - IF that power can put out at least 1A. 500mA will likely cause instability. It depends on if you want to try to get the whole thing down to one power cable.
Cheap SD Card - 8 gigs is fine, but get whatever works for you. This doesn't need to be awesome.
A 1 foot HDMI cable. You're gonna mount the Raspberry Pi to the back of the monitor and hide it so you want the cable to be as small as possible. You might need a 90 degree or 270 degree adapter to avoid HDMI cables from sticking out, or a short cable with this built in.
And finally - a 24"-ish (smaller is fine) LCD (IPS is nice) monitor with smallish bezels and HDMI inputs that go out to the side (NOT directly out the back) as you want this flush on the wall. Think about how you'll mount it. You can take the back off the monitor and use hanging wire OR use a flush VESA mount.

Install Raspbian on the Raspberry Pi. I use Noobs to bootstrap my install as it's super fast and easy. Go through the standard setup. Make sure you've set up:

Wifi login
Timezone
Boot to Desktop automatically
Install chromium via "sudo apt-get install -y rpi-chromium-mods"

Then you make sure that Chromium starts up full screen, the mouse is hidden, and we're looking at the dashboard! It's super important you don't have to touch it. It's an appliance, right?

sudo nano ~/.config/lxsession/LXDE-pi/autostart

@xset s off
@xset -dpms
@xset s noblank
@chromium-browser --noerrdialogs --incognito --kiosk http://dakboard.com/app/?p=YOUR_PRIVATE_URL

Then you can set up a cronjob if you want to turn the Pi's screen on and off on a schedule. Using rpi-hdmi.sh you can make a crontab -e that looks like this:

# Turn HDMI Off (22:00/10:00pm)
0 22 * * * /home/pi/rpi-hdmi.sh off

# Turn HDMI On (7:00/7:00am)
0 7 * * * /home/pi/rpi-hdmi.sh on

My family uses Google Calendar (GSuite) to manage hanselman.com, but I use Outlook at work. I also have a lot of business/work crap in my calendar that the family doesn't need to see. So I have two problems here, filtering, and appointment movement between Work and Home.

My wife and kids use Google Calendar and it's their authoritative source. My work calendar is MY authoritative source, so I want to sync Outlook->Google but ONLY including Personal/Podcasts/Travel categories. I categorize in Outlook at work, and then those appointments that are appropriate for the family calendar get moved over. Then the Family Calendar dashboard includes color coordinated items for Mom, Dad, Kid1, Kid2. The kids include homework that's due as appointments.

I use the Outlook Google Calendar Sync open source project to do this calendar movement for me. It does require Outlook and is a client solution so if you have a better idea let me know.

GOTCHA: I have been using Google Calendar for YEARS. I have also been using sync tools like this for years. As such, I was noticing that sometimes DakBoard would timeout asking for my Google Calendar's ICS file. It would take minutes. So I requested it myself and it was 26 megs. It's clear that Google calendar doesn't care deeply about iCal and that's disappointing. This could easily be solved if they'd support some kind of OData like URL-based query for fromdate=, todate=. In this case, the DakBoard was getting 26 megs over and over to just show a few weeks of appointments. I literally had appointments from 2005 in the calendar. I decided that since I'd declared Outlook my authoritative source for my calendar that I'd take an archive (one time snapshot) of my iCal and then delete all my calendar items from Google Calendar and re-sync, one way, from the authoritative source, going back 1 year. I'm likely a rare case but it's worth noting in case you bump into this.

All in all, this can easily be done in a short few hours if you have a Pi and a monitor. The time will be spent making it "sanitary." Making the cables perfect, hanging it on the wall, hiding the cables, then tweaking the screen to be perfect.

Editing screens on DakBoard

DakBoard has a free option that works great, or a Premium subscription that gives you even more control. Again, it depends on your web/art ability, and your patience. This is a fun new world that I'm excited to get involved with and my family is already stoked about this new display as we enter the holiday season.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Compiling C# to WASM with Mono and Blazor then Debugging .NET Source with Remote Debugging in Chrome DevTools

Nov 16, 2018

Description:

Blazor quietly marches on. In case you haven't heard (I've blogged about Blazor before) it's based on a deceptively simple idea - what if we could run .NET Standard code in the browsers? No, not Silverlight, Blazor requires no plugins and doesn't introduce new UI concepts. What if we took the AOT (Ahead of Time) compilation work pioneered by Mono and Xamarin that can compile C# to Web Assembly (WASM) and added a nice UI that embraced HTML and the DOM?

Sound bonkers to you? Are you a hater? Think this solution is dumb or not for you? To the left.

For those of you who want to be wacky and amazing, consider if you can do this at the command line:

$ cat hello.cs
class Hello {
static int Main(string[] args) {
System.Console.WriteLine("hello world!");
return 0;
}
}
$ mcs -nostdlib -noconfig -r:../../dist/lib/mscorlib.dll hello.cs -out:hello.exe
$ mono-wasm -i hello.exe -o output
$ ls output
hello.exe index.html index.js index.wasm mscorlib.dll

Then you could do this in the browser...look closely on the right side there.

You can see the Mono runtime compiled to WASM coming down. Note that Blazor IS NOT compiling your app into WASM. It's sending Mono (compiled as WASM) down to the client, then sending your .NET Standard application DLLs unchanged down to run within the context of a client-side runtime. All using open web tools. All open source.

Blazor uses Mono to run .NET in the browser

So Blazor allows you to make SPAs (Single Page Apps) much like the Angular/Vue/React, etc. apps out there today, except you're only writing C# and Razor (HTML).

Consider this basic example.

@page "/counter"

<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
int currentCount = 0;
void IncrementCount() {
currentCount++;
}
}

You hit the button, it calls some C# that increments a variable. That variable is referenced higher up and automatically updated. This is a trivial example. Check out the source for FlightFinder for a real Blazor application.

This is stupid, Scott. How do I debug this mess? I see you're using Chrome but seriously, you're compiling C# and running in the browser with Web Assembly (how prescient) but it's an undebuggable black box of a mess, right?

I say nay nay!

C:\Users\scott\Desktop\sweetsassymollassy> $Env:ASPNETCORE_ENVIRONMENT = "Development"
C:\Users\scott\Desktop\sweetsassymollassy> dotnet run --configuration Debug
Hosting environment: Development
Content root path: C:\Users\scott\Desktop\sweetsassymollassy
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001
Application started. Press Ctrl+C to shut down.

Then Win+R and run this command (after shutting down all the Chrome instances)

%programfiles(x86)%\Google\Chrome\Application\chrome.exe --remote-debugging-port=9222 http://localhost:5000

Now with your Blazor app running, hit Shift+ALT+D (or Shift+SILLYMACKEY+D) and behold.

Feel free to click and zoom in. We're at a breakpoint in some C# within a Razor page...in Chrome DevTools.

HOLY CRAP IT IS DEBUGGING C# IN CHROME

What? How?

Blazor provides a debugging proxy that implements the Chrome DevTools Protocol and augments the protocol with .NET-specific information. When the debugging keyboard shortcut is pressed, Blazor points the Chrome DevTools at the proxy. The proxy connects to the browser window you're seeking to debug (hence the need to enable remote debugging).

It's just getting started. It's limited, but it's awesome. Amazing work being done by lots of teams all coming together into a lovely new choice for the open source web.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Web Development and Advanced Techniques with Linux on Windows (WSL)

Nov 14, 2018

Description:

I've posted several times on the Windows Subsystem for Linux that allows you to run Linux on Windows 10 without a VM. Check out my YouTube on Editing code and files on Windows Subsystem for Linux on Windows 10. There's just one rule. You can mess with Windows files from Linux but you can't mess with Linux files from Windows. Otherwise, go crazy and enjoy. Here are some of my previous posts you should check out:

The year of Linux on the (Windows) Desktop - WSL Tips and Tricks
Setting up a Shiny Development Environment within Linux on Windows 10
Installing Fish Shell on Ubuntu on Windows 10
Writing and debugging Linux C++ applications from Visual Studio using the "Windows Subsystem for Linux"
Ubuntu now in the Windows Store: Updates to Linux on Windows 10 and Important Tips

WSL is pretty fantastic. Although its disk access is slower than native Linux, I find myself using it every day. If you want to set up Linux on your Windows 10 machine, just turn the feature on, then head over to the Windows Store and search for "Linux."

You can turn on Linux on Windows 10 by typing "Windows Features" and checking "Windows Subsystem for Linux." Then get a Linux from the Windows Store.

If you prefer to use PowerShell and do it in one line, just do this from an Admin PowerShell prompt:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then go get any one (or more!) of these from the Store:

Ubuntu
OpenSUSE
SLES
Kali Linux
Debian GNU/Linux

When you're in a Windows shell like PowerShell or CMD you might want to run Linux and/or jump comfortably between shells. You can do that in a few ways. The best and recommended way is running "wsl.exe" as that will start up your default distro. You can also just type the name of the distro. So I can type "ubuntu" and get in there directly.

You can type "bash" but that's not recommended if you've changed shells. If you've set up zsh or fish and type bash, it's gonna still try to run bash.

Here I've typed wslconfig and you can see I've got both Ubuntu and Debian installed, with Ubuntu as the default when I type "wsl."

C:\Users\scott>wslconfig /list
Windows Subsystem for Linux Distributions:
Ubuntu-18.04 (Default)
Debian

Now that I know how to run wsl from anywhere, I can even pipe stuff into and out of Linux from outside. For example, here I am in cmd.exe, but I'm calling commands in Linux, whose output comes back out to Windows, then goes back in, and so on. You can mix and match however you'd like!

C:\dev>type hello.sh
echo Hello
C:\dev>wsl cat /mnt/c/dev/hello.sh | wsl fromdos | wsl /bin/sh
Hello

This means even when I'm in CMD or PowerShell I can use Linux commands that are convenient or familiar to me. For example, here I'm piping a Windows Update log file into the Linux sha1sum command. Note the use of - to accept standard input - even though that input is from Windows!

C:\Users\scott\Desktop>type WindowsUpdate.log | wsl sha1sum -
3b48adce8f6c9cb816e8845d824dacc0440ca1b8 -

Sweet. There are a number of nice advanced techniques if you want to make your WSL installations smarter AND automatically configured. You can make a file at /etc/wsl.conf to affect your DNS, file metadata, and drive mounting.

When you are in a WSL shell, your Windows drive (your main drive) is at /mnt/c. So here is my Windows desktop as viewed from WSL:

screenfetch in WSL

I do most of my dev work in /mnt/d/github, for example. That way I can use VS Code from Windows but run Node/Ruby/Go/Whatever from WSL.

I keep my files on my Windows drive, edit them in VS Code, but run things in WSL. Again, never use Windows utilities to reach into and/or edit files on the WSL/Linux subsystem. Also, always be conscious of your CR/LF situation, and be especially careful if you're going to run git in both Windows and WSL.

Here's VS Code at the top, WSL/Ubuntu running Node at the bottom, and the local node app running in Edge on Windows on the lower right. We are sharing file systems and network port space:

Cross platform Web Dev

You can even share environment variables between WSL and Windows with a special environment variable called WSLENV. This is pretty advanced but super powerful. Read this carefully. You make an environment variable that is a list of names of other variables that you want translated between environments.

That means you can do something like this. I'm in WSL and I have an environment variable that points to a location on the filesystem. I need it to be correct in both worlds.

scott@IRONHEART:/mnt/d$ export MYLINUXPATH=/mnt/d/github/expresstest
scott@IRONHEART:/mnt/d$ export WSLENV=MYLINUXPATH/p
scott@IRONHEART:/mnt/d$ cmd.exe
D:\>echo %MYLINUXPATH%
D:\github\expresstest

Read that carefully. It's awesome and it's very configurable.

There's lots of users of WSL and many have assembled great lists of resources like Awesome-WSL by Hayden.

It's also worth pointing out that WSL is now just one console choice among many. There's PowerShell, CMD.exe, and a half dozen Linuxes. You can even make your own custom Linux Distro for your company if you like. And there's a whole world of 3rd party Consoles that sit on top of/replace conhost.exe so you can have consoles with tabs, cool fonts, ones based on web tech, whatever! You can even choose WSL/bash as your default shell in Visual Studio Code if you'd like with Ctrl+~.

Hope this gets you started with Linux on Windows. What did I miss? Sound off in the comments.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Terminus and FluentTerminal are the start of a world of 3rd party OSS console replacements for Windows

Nov 9, 2018

Description:

Folks have been trying to fix and supercharge the console/command line on Windows since Day One. There's a ton of open source projects over the years that try to take over or improve on "conhost.exe" (the thing that handles consoles like Bash/PowerShell/cmd on Windows). Most of these 3rd party consoles have weird or subtle issues. For example, I like Hyper as a terminal but it doesn't support Ctrl-C at the command line. I use that hotkey often enough that this small bug means I just won't use that console at all.

Per the CommandLine blog:

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn't a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not...until now!

Looks like the Windows Console team is working on making 3rd party consoles better by creating this new PTY mechanism:

We've heard from many, many developers, who've frequently requested a PTY-like mechanism in Windows - especially those who created and/or work on ConEmu/Cmder, Console2/ConsoleZ, Hyper, VSCode, Visual Studio, WSL, Docker, and OpenSSH.

Very cool! Until it's ready I'm going to continue to try out new consoles. A lot of people will tell you to use the cmder package that includes ConEmu. There's a whole world of 3rd party consoles to explore. Even more fun are the choices of color schemes and fonts to explore.

For a while I was really excited about Hyper. Hyper is - wait for it - an electron app that uses HTML/CSS for the rendering of the console. This is a pretty heavyweight solution to the rendering that means you're looking at 200+ megs of memory for a console rather than 5 megs or so for something native. However, it is a clever way to just punt and let a browser renderer handle all the complex font management. For web-folks it's also totally extensible and skinnable.

As much as I like Hyper and its look, the inability to support hitting "Ctrl-C" at the command line is just too annoying. It appears it's a very well-understood issue that will ultimately be solved by the ConPTY work as the underlying issue is a deficiency in the node-pty library. It's also a long-running issue in the VS Code console support. You can watch the good work that's starting in this node-pty PR that will fix a lot of issues for node-based consoles.

Until this all fixes itself, I'm personally excited (and using) these two terminals for Windows that you may not have heard of.

Terminus

Terminus is open source over at https://github.com/Eugeny/terminus and works on any OS. It's immediately gorgeous, and while it's in alpha, it's very polished. Be sure to explore the settings and adjust things like Blur/Fluent, Themes, opacity, and fonts. I'm using FiraCode Retina with Ligatures for my console and it's lovely. You'll have to turn ligature support on explicitly under Settings | Appearance.

Terminus is a lovely console replacement

Terminus also has some nice plugins. I've added Altair, Clickable-Links, and Shell-Selector to my loadout. The shell selector makes it easy on Windows 10 to have PowerShell, Cmd, and Ubuntu/Bash open all at the same time in multiple tabs.

I did do a little editing of the default config file to set up Ctrl-T for new tab and Ctrl-W for close-tab for my personal taste.

FluentTerminal

FluentTerminal is a Terminal Emulator based on UWP. Its memory usage on my machine is about 1/3 of Terminus and under 100 megs. As a Windows 10 UWP app it looks and feels very native. It supports ALT-ENTER Fullscreen, and tabs for as many consoles as you'd like. You can right-click and color specific tabs which was a nice surprise and turned out to be useful for on-the-fly categorization.

image

FluentTerminal has a nice themes setup and includes a half-dozen to start, plus supports imports.

It's not yet in the Windows Store (perhaps because it's in active development) but you can easily download a release and install it with a PowerShell install.ps1 script.

I have found the default Keybindings very intuitive with the usual Ctrl-T and Ctrl-W tab managers already set up, as well as Shift-Ctrl-T for opening a new tab for a specific shell profile (cmd, powershell, wsl, etc).

Both of these are great new entries in the 3rd party terminal space and I'd encourage you to try them both out and perhaps get involved on their respective GitHubs! It's a great time to be doing console work on Windows 10!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Updating my ASP.NET Website from .NET Core 2.2 Preview 2 to .NET Core 2.2 Preview 3

Nov 7, 2018

Description:

I've recently returned from a month in South Africa and I was looking to unwind while the jetlagged kids sleep. I noticed that .NET Core 2.2 Preview 3 came out while I wasn't paying attention. My podcast site runs on .NET Core 2.2 Preview 2 so I thought it'd be interesting to update the site. That means I'd need to install the new SDK, update the project references, ensure it builds in Azure DevOps's CI/CD Pipeline, AND deploys and runs in Azure.

Let's see how it goes. I'm a little out of it but I'm writing this blog post AS I DO THE WORK so you'll see my train of thought with no editing.

Ok, what version of .NET Core does this machine have?

C:\Users\scott> dotnet --version
2.2.100-preview2-009404
C:\Users\scott> dotnet tool update --global dotnet-outdated
Tool 'dotnet-outdated' was successfully updated from version '2.0.0' to version '2.1.0'.

Looks like I'm on Preview 2 as I guessed. I'll take a moment and upgrade one Global Tool I love - dotnet-outdated - in case it's been updated since I've been out. Looks like it has a minor update. Dotnet Outdated is a great utility for checking references and you should absolutely be using it or another tool like NuKeeper or Dependabot.

I'll head over to https://www.microsoft.com/net/download/dotnet-core/2.2 and get .NET Core 2.2 Preview 3. I'm building on Windows but I may want to update my Linux (WSL) install and Docker images later.

All right, installed. Check it with dotnet --version to confirm it's correct:

C:\Users\scott> dotnet --version
2.2.100-preview3-009430

Let's try to build my podcast website. Note that it consists of two projects, the main website on ASP.NET Core, and Unit Tests with XUnit and Selenium.

D:\github\hanselminutes-core [main ≡]> dotnet build
Microsoft (R) Build Engine version 15.9.8-preview+g0a5001fc4d for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restoring packages for D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj...
Restore completed in 80.05 ms for D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj.
Restore completed in 25.4 ms for D:\github\hanselminutes-core\hanselminutes.core\hanselminutes-core.csproj.
D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj : error NU1605: Detected package downgrade: Microsoft.AspNetCore.App from 2.2.0-preview3-35497 to 2.2.0-preview2-35157. Reference the package directly from the project to select a different version. [D:\github\hanselminutes-core\hanselminutes-core.sln]

The dotnet build fails, which makes sense, because it's saying hey, you're asking for 2.2 Preview 2 but I've got Preview 3 all ready for you!

Detected package downgrade: Microsoft.AspNetCore.App from 2.2.0-preview3-35497 to 2.2.0-preview2-35157

Let's see what "dotnet outdated" says about this!

dotnet outdated says there's a few packages I need to update

Cool! I love these dependency tools and the community around them. You can see that it's noticed the Preview 2 -> Preview 3 opportunity, as well as a few other smaller minor or patch version bumps.

I can run dotnet outdated -u to automatically update the references, but I'll want to treat the "reference" of "Microsoft.AspNetCore.App" a little differently and use implicit versioning. You don't want to include a specific version - as I did - for this package.

Per the docs for .NET Core 2.1 and up:

Remove the "Version" attribute on the package reference to Microsoft.AspNetCore.App. Projects which use <Project Sdk="Microsoft.NET.Sdk.Web"> do not need to set the version. The version will be implied by the target framework and selected to best match the way ASP.NET Core 2.1 works. (See below for more information.)

Doing this also fixes the build because it picks up the latest 2.2 SDK automatically! Now I'll run my Unit Tests (with code coverage) and see how it works. Cool all tests pass (including Selenium).

88% Code Coverage

It builds locally, will it build in Azure DevOps when I check it in to GitHub?

Azure DevOps

I added a .NET Core SDK installer step when I set up my Azure DevOps Pipeline. This is where I'm explicitly installing a Preview version of the .NET Core SDK.

While I was in here I noticed the Azure DevOps pipeline was using NuGet 4.4.1. I ran "nuget update -self" on my local machine and got 4.7.1, so I updated that version as well to make the CI/CD pipeline reflect my own machine.

Now I'll git add, git commit (using verified/signed GitHub commits with my PGP Key and Yubikey):

D:\github\hanselminutes-core [main ≡ +0 ~2 -0 !]> git add .
D:\github\hanselminutes-core [main ≡ +0 ~2 -0 ~]> git commit -m "bump to 2.2 Preview 3"
[main 7a84bc7] bump to 2.2 Preview 3
2 files changed, 16 insertions(+), 13 deletions(-)

Add in a Git Push...and I can see the build start in Azure DevOps:

CI/CD pipeline build starting

Cool. While that's building, I'll make sure my existing Azure App Service (website) installation is ready to receive the deployment (assuming the build succeeds). Since I'm using an ASP.NET Core Preview build I'll want to make sure I have the Preview Site Extension installed, per the docs.

If I visit the Site Extensions menu item in the Azure Portal I can see I've got .NET Core 2.2 Preview 2, but there's an update available, as expected.

Update Available

I'll click this extension and then click Update. This extension's job is to make sure the App Service gets Preview versions of the .NET Core SDK. Only released (GA - general availability) SDKs are installed by default.

OK, .NET Core 2.2 is all updated in Azure, so I'll confirm that it's deployed as well in Azure DevOps. Yes, I'm deploying into Production without a net. Seriously, though, if there is an issue I'll just rollback. If I was deeply serious about downtime I'd be doing all this in Staging.

image

Successful local test, successful CI/CD build and test, successful deployment, and the site is back up now running on ASP.NET Core 2.2 Preview 3. It took about 45 minutes to do the work while simultaneously taking these screenshots and writing this blog post during the slow parts.

Good night everyone!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

.NET Core and .NET Standard for IoT - The potential of the Meadow Kickstarter

Nov 1, 2018

Description:

I saw this Kickstarter today - Meadow: Full-stack .NET Standard IoT platform. It says that "It combines the best of all worlds; it has the power of RaspberryPi, the computing factor of an Arduino, and the manageability of a mobile app. And best part? It runs full .NET Standard on real IoT hardware."

NOTE: I don't have any relationship with the company/people behind this Kickstarter, but it seems interesting so I'm sharing it with you. As with all Kickstarters, it's not a pre-order, it's an investment that may not pan out, so always be prepared to lose your investment. I lost mine with the .NET "Agent" SmartWatch even though all signs pointed to success.

Meadow IoT Kickstarter

I've done IoT work on Raspberry Pis, which has gotten much easier lately with the emerging community support for ARM32, Raspberry Pis, and cool stuff happening on Windows 10 IoT. I've written on how easy it is to get running on Raspberry Pi. I was even able to get my own podcast website running on Raspberry Pi and in Docker.

This Meadow Kickstarter says it's running on the Mono Runtime and will support the .NET Standard 2.0 API. That means that you likely already know how to program to it. Most libraries on NuGet are .NET Standard compliant so a ton of open source software should "Just Work" on any solution that supports .NET Standard.
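To make that concrete, here's a purely illustrative sketch of my own - plain .NET Standard 2.0 code with no platform-specific APIs and no real Meadow types (I'm not using their SDK, which isn't out yet) - the kind of library code the claim suggests would run unchanged on any compliant runtime:

using System;
using System.Collections.Generic;
using System.Linq;

// A plain .NET Standard 2.0 class: nothing here is platform-specific, so the same
// compiled library could run on .NET Core, full Framework, Mono, or any runtime
// that implements the standard.
public class ReadingSmoother
{
    private readonly Queue<double> _window = new Queue<double>();
    private readonly int _size;

    public ReadingSmoother(int size) => _size = size;

    // Add a raw sensor reading and get back a rolling average over the last N samples.
    public double Add(double reading)
    {
        _window.Enqueue(reading);
        if (_window.Count > _size)
        {
            _window.Dequeue();
        }
        return _window.Average();
    }
}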

One thing that seems interesting about Meadow is this sentence: "The power of Raspberry Pi in the computing factor of an Arduino, and the manageability of a mobile app."

Raspberry Pis are full-on computers. Arduinos are small little (mostly) single-tasked devices. Microcomputer vs Microcontroller. It's overkill to have Ubuntu on a full computer just to turn a device on and off. You usually want IoT devices to have as small a surface area as possible.

Meadow says "Meadow has been designed to run on a variety of microcontrollers, and our first board is based on STMicroelectronics' flagship STM32F7 MCU. The Meadow F7 Micro board is an embeddable module that's based on Adafruit Feather form factor." Remember, we are talking megs not gigs here. "We've paired the STM32F7 with 32MB of flash storage and 16MB of RAM, so you can run pretty much anything you can think of building." This is just a 216MHz ARM board.

Be sure to scroll all the way down to the bottom of the page as they outline risks as well as what's left to be done.

What do you think? While you're at it, check out (total coincidence) this week's sponsor, Intel IoT! They have some great developer kits.

Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*
© 2018 Scott Hanselman. All rights reserved.
     

Side by Side user scoped .NET Core installations on Linux with dotnet-install.sh

Oct 30, 2018

Description:

I can run .NET Core on Windows, Mac, or a dozen Linuxes. On my Ubuntu installation I can check what version I have installed and where it is like this:

$ dotnet --version
2.1.403
$ which dotnet
/usr/bin/dotnet

If we interrogate that dotnet file we see it's a link to elsewhere:

$ ls -alogF /usr/bin/dotnet
lrwxrwxrwx 1 22 Sep 19 03:10 /usr/bin/dotnet -> ../share/dotnet/dotnet*

If we head over there we see similar stuff as we do on Windows.

Side by side DotNet installs

Basically c:\program files\dotnet is the same as /usr/share/dotnet.

$ cd ../share/dotnet
$ ll
total 136
drwxr-xr-x 1 root root   4096 Oct  5 19:47 ./
drwxr-xr-x 1 root root   4096 Aug  1 17:44 ../
drwxr-xr-x 1 root root   4096 Feb 13  2018 additionalDeps/
-rwxr-xr-x 1 root root 105704 Sep 19 03:10 dotnet*
drwxr-xr-x 1 root root   4096 Feb 13  2018 host/
-rw-r--r-- 1 root root   1083 Sep 19 03:10 LICENSE.txt
drwxr-xr-x 1 root root   4096 Oct  5 19:48 sdk/
drwxr-xr-x 1 root root   4096 Aug  1 18:07 shared/
drwxr-xr-x 1 root root   4096 Feb 13  2018 store/
-rw-r--r-- 1 root root  27700 Sep 19 03:10 ThirdPartyNotices.txt
$ ls sdk
2.1.4  2.1.403  NuGetFallbackFolder
$ ls shared
Microsoft.AspNetCore.All  Microsoft.AspNetCore.App  Microsoft.NETCore.App
$ ls shared/Microsoft.NETCore.App/
2.0.5  2.1.5

Looking in directories works to figure out what SDKs and Runtime versions are installed, but the best way is to use the dotnet cli itself like this. 

$ dotnet --list-sdks
2.1.4 [/usr/share/dotnet/sdk]
2.1.403 [/usr/share/dotnet/sdk]
$ dotnet --list-runtimes
Microsoft.AspNetCore.All 2.1.5 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.5 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.0.5 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

There are great instructions on how to set up .NET Core on your Linux machines via Package Manager here.

Note that these installs of the .NET Core SDK are installed in /usr/share. I can use the dotnet-install.sh to do non-admin installs in my own user directory.

In order to gain more control and do things more manually, you can use this shell script here: https://dot.net/v1/dotnet-install.sh and its documentation is here at docs. For Windows there is also a PowerShell version https://dot.net/v1/dotnet-install.ps1

The main usefulness of these scripts is in automation scenarios and non-admin installations. There are two scripts: One is a PowerShell script that works on Windows. The other script is a bash script that works on Linux/macOS. Both scripts have the same behavior. The bash script also reads PowerShell switches, so you can use PowerShell switches with the script on Linux/macOS systems.

For example, I can see all the current .NET Core 2.1 versions at https://www.microsoft.com/net/download/dotnet-core/2.1 and 2.2 at https://www.microsoft.com/net/download/dotnet-core/2.2 - the URL format is regular. I can see from that page that at the time of this blog post, v2.1.5 is both Current (most recent stable) and also LTS (Long Term Support).

I'll grab the install script and chmod +x it. Running it with no options will get me the latest LTS release.

$ wget https://dot.net/v1/dotnet-install.sh
--2018-10-31 15:41:08-- https://dot.net/v1/dotnet-install.sh
Resolving dot.net (dot.net)... 104.214.64.238
Connecting to dot.net (dot.net)|104.214.64.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 30602 (30K) [application/x-sh]
Saving to: ‘dotnet-install.sh’

I like the "-DryRun" option because it will tell you what WILL happen without doing it.

$ ./dotnet-install.sh -DryRun
dotnet-install: Payload URL: https://dotnetcli.azureedge.net/dotnet/Sdk/2.1.403/dotnet-sdk-2.1.403-linux-x64.tar.gz
dotnet-install: Legacy payload URL: https://dotnetcli.azureedge.net/dotnet/Sdk/2.1.403/dotnet-dev-ubuntu.16.04-x64.2.1.403.tar.gz
dotnet-install: Repeatable invocation: ./dotnet-install.sh --version 2.1.403 --channel LTS --install-dir <auto>

If I use the dotnet-install script, I can have multiple copies of the .NET Core SDK installed in my user folder at ~/.dotnet. It all depends on your PATH. Note below how I put ~/.dotnet (my .NET Core install location) first on my PATH and then run dotnet --list-sdks. Make sure you know what your PATH is and that you're getting the .NET Core you expect for your user.

$ which dotnet
/usr/bin/dotnet
$ export PATH=/home/scott/.dotnet:$PATH
$ which dotnet
/home/scott/.dotnet/dotnet
$ dotnet --list-sdks
2.1.402 [/home/scott/.dotnet/sdk]

Now I will add a few more .NET Core SDKs side-by-side with the dotnet-install.sh script. Remember again, these aren't .NET SDKs installed with apt-get, which would be system-level and run with sudo. These are user-profile installed versions.

There's really no reason to do side by side at THIS level of granularity, but it makes the point.

$ dotnet --list-sdks
2.1.302 [/home/scott/.dotnet/sdk]
2.1.400 [/home/scott/.dotnet/sdk]
2.1.401 [/home/scott/.dotnet/sdk]
2.1.402 [/home/scott/.dotnet/sdk]
2.1.403 [/home/scott/.dotnet/sdk]

When you're doing your development, you can use "dotnet new globaljson" and have each path/project request a specific SDK version.

$ dotnet new globaljson
The template "global.json file" was created successfully.
$ cat global.json
{
  "sdk": {
    "version": "2.1.403"
  }
}

Hope this helps!

Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*


© 2018 Scott Hanselman. All rights reserved.
     

ASP.NET Core 2.2 Parameter Transformers for clean URL generation and slugs in Razor Pages or MVC

Oct 25, 2018

Description:

I noticed that last week .NET Core 2.2 Preview 3 was released:

.NET Core 2.2 Preview 3
ASP.NET Core 2.2 Preview 3
Entity Framework 2.2 Preview 3

You can download and get started with .NET Core 2.2 on Windows, macOS, and Linux:

.NET Core 2.2 Preview 3 SDK (includes the runtime)
.NET Core 2.2 Preview 3 Runtime

Docker images are available at microsoft/dotnet for .NET Core and ASP.NET Core.

.NET Core 2.2 Preview 3 can be used with Visual Studio 15.9 Preview 3 (or later), Visual Studio for Mac and Visual Studio Code.

The feature I am most stoked about in ASP.NET Core 2.2 is a subtle one, but I remember implementing it manually many times over the last 10 years. I'm happy to see it nicely integrated into ASP.NET Core's MVC and Razor Pages patterns.

ASP.NET Core 2.2 introduces the concept of Parameter Transformers to routing. Remember there isn't a direct relationship between what's in the URL/address bar and what's on disk. The routing subsystem handles URLs coming in from the client and routes them to Controllers, but it also generates URLs (strings) when given a Controller and Action.

So if I'm using Razor Pages and I have a file Pages/FancyPants.cshtml I can get to it by default at /FancyPants. I can also "ask" for the URL when I'm creating anchors/links in my Razor Page:

<a class="nav-link text-dark" asp-area="" asp-page="/fancypants">Fancy Pants</a>

Here I'm asking for the page. That asp-page attribute points to a logical page, not a physical file.

 

We can make an IOutboundParameterTransformer that changes URLs to a format like (for example) a standard WordPress slug in two-words form.

public class SlugifyParameterTransformer : IOutboundParameterTransformer
{
    public string TransformOutbound(object value)
    {
        if (value == null) { return null; }

        // Slugify value
        return Regex.Replace(value.ToString(), "([a-z])([A-Z])", "$1-$2").ToLower();
    }
}

Then you let the ASP.NET Pipeline know about this transformer, either in Razor Pages...

services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
    .AddRazorPagesOptions(options =>
    {
        options.Conventions.Add(
            new PageRouteTransformerConvention(
                new SlugifyParameterTransformer()));
    });

or in ASP.NET MVC:

services.AddMvc(options =>
{
    options.Conventions.Add(new RouteTokenTransformerConvention(
        new SlugifyParameterTransformer()));
});

Now when I run my application, I get my routing both coming in (from the client web browser) and going out (generated via Razor Pages). Here I'm hovering over the "Fancy Pants" link at the top of the page. Notice that it's generated /fancy-pants as the URL.

image

So that same code from above that generates anchor tags <a href= gives me the expected new style of URL, and I only need to change it in one location.

There is also a new service called LinkGenerator that's a singleton you can call outside the context of an HTTP call (without an HttpContext) in order to generate a URL string.

return _linkGenerator.GetPathByAction(
    httpContext,
    controller: "Home",
    action: "Index",
    values: new { id = 42 });

This can be useful if you are generating URLs outside of Razor or in some middleware. There are a lot more subtle little improvements in ASP.NET Core 2.2, but this was the one that I will find the most useful in the near term.
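For instance, here's a hedged sketch of my own (the DigestLinkBuilder class and the "Home"/"Index" values are made up, and I'm assuming the GetPathByAction overload that doesn't take an HttpContext) showing LinkGenerator injected into a plain class that runs outside a request:

using Microsoft.AspNetCore.Routing;

public class DigestLinkBuilder
{
    private readonly LinkGenerator _linkGenerator;

    public DigestLinkBuilder(LinkGenerator linkGenerator)
    {
        _linkGenerator = linkGenerator;
    }

    public string LinkToItem(int id)
    {
        // No HttpContext here - we might be in a background job or middleware - so this
        // uses the overload that takes just the action, controller, and route values.
        return _linkGenerator.GetPathByAction(
            action: "Index",
            controller: "Home",
            values: new { id });
    }
}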

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Dependabot for .NET Core dependency tracking in GitHub

Oct 22, 2018

Description:

Bump Microsoft.ApplicationInsights.AspNetCore from 2.5.0-beta1 to 2.5.0-beta2

I've been exploring automated dependency tracking lately. I usually use my podcast's ASP.NET Core website that I host on GitHub as a guinea pig. I tried NuKeeper and the dotnet outdated global tool - both of which are fantastic and worth exploring.

This week I'm trying Dependabot. I have no relationship with this company. Public repos and personal account repos are free, their pricing is very clear, and organization accounts start at just $15 with a free trial.

I'm really impressed with how clever Dependabot is. It's almost like a person in its behavior. Yes, I realize that's kind of the point, but it's no less surprising to see. A well-written bot is a joy to behold.

For example, here is a PR (Pull Request) where Dependabot says "Bumps Microsoft.ApplicationInsights.AspNetCore from 2.5.0-beta1 to 2.5.0-beta2."

Basic stuff, right? But that's not all.

It not only does the basics where it noticed that a version bump occurred in a NuGet package, but it also copied the release notes from that NuGet package's release on GitHub! It included links to what was fixed between versions, links to the change logs, AND a complete linked commit list. I mean, that's just lovely.

A few days later, Dependabot went and closed the PR because the dependency had been updated again (I was slow), then it commented telling me this PR was superseded by another.

Superseded by #20

Dependabot, like any good bot, also includes commands you can send to it via "chats" in GitHub PR comments. You can tell it to use specific labels or control milestones. You can also control behavior in the Dependabot Dashboard and have it automerge things like minor versions, or just lock things down to security-only updates.

All in all, it's a very smart bot that supports basically all the languages. .NET support is in Beta, but I haven't had any issues with it. You should definitely check it out. And let me tell you, once you've got everything automated you'll wonder how you ever managed before.

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Customer Notes: Diagnosing issues under load of Web API app migrated to ASP.NET Core on Linux

Oct 16, 2018

Description:

When the engineers on the ASP.NET/.NET Core team talk to real customers about actual production problems they have, interesting stuff comes up. I've tried to capture a real customer interaction here without giving away their name or details.

The team recently had the opportunity to help a large customer of .NET investigate performance issues they’ve been having with a newly-ported ASP.NET Core 2.1 app when under load. The customer's developers are experienced with ASP.NET on Windows but in this case they needed help getting started with performance investigations with ASP.NET Core in Linux containers.

As with many performance investigations, there were a variety of issues contributing to the slowdowns, but the largest contributors were time spent garbage collecting (due to unnecessary large object allocations) and blocking calls that could be made asynchronous.

After resolving the technical and architectural issues detailed below, the customer's Web API went from only being able to handle several hundred concurrent users during load testing to being able to easily handle 3,000, and they are now running the new ASP.NET Core version of their backend web API in production.

Problem Statement

The customer recently migrated their .NET Framework 4.x ASP.NET-based backend Web API to ASP.NET Core 2.1. The migration was broad in scope and included a variety of tech changes.

Their previous version Web API (We'll call it version 1) ran as an ASP.NET application (targeting .NET Framework 4.7.1) under IIS on Windows Server and used SQL Server databases (via Entity Framework) to persist data. The new (2.0) version of the application runs as an ASP.NET Core 2.1 app in Linux Docker containers with PostgreSQL backend databases (via Entity Framework Core). They used Nginx to load balance between multiple containers on a server and HAProxy load balancers between their two main servers. The Docker containers are managed manually or via Ansible integration for CI/CD (using Bamboo).

Although the new Web API worked well functionally, load tests began failing with only a few hundred concurrent users. Based on current user load and projected growth, they wanted the web API to support at least 2,000 concurrent users. Load testing was done using Visual Studio Team Services load tests running a combination of web tests mimicking users logging in, doing the stuff of their business, activating tasks in their application, as well as pings that the Mobile App's client makes regularly to check for backend connectivity. This customer also uses New Relic for application telemetry and, until recently, New Relic agents did not work with .NET Core 2.1. Because of this, there was unfortunately no app diagnostic information to help pinpoint sources of slowdowns.

Lessons Learned

Cross-Platform Investigations

One of the most interesting takeaways for me was not the specific performance issues encountered but, instead, the challenges this customer had working in a Linux environment. The team's developers are experienced with ASP.NET on Windows and comfortable debugging in Visual Studio. Despite this, the move to Linux containers has been challenging for them.

Because the engineers were unfamiliar with Linux, they hired a consultant to help deploy their Docker containers on Linux servers. This model worked to get the site deployed and running, but became a problem when the main backend began exhibiting performance issues. The performance problems only manifested themselves under a fairly heavy load, such that they could not be reproduced on a dev machine. Up until this investigation, the developers had never debugged on Linux or inside of a Docker container except when launching in a local container from Visual Studio with F5. They had no idea how to even begin diagnosing issues that only reproduced in their staging or production environments. Similarly, their dev-ops consultant was knowledgeable about Linux infrastructure but not familiar with application debugging or profiling tools like Visual Studio.

The ASP.NET team has some documentation on using PerfCollect and PerfView to gather cross-platform diagnostics, but the customer's devs did not manage to find these docs until they were pointed out. Once an ASP.NET Core team engineer spent a morning showing them how to use PerfCollect, LLDB, and other cross-platform debugging and performance profiling tools, they were able to make some serious headway debugging on their own. We want to make sure everyone can debug .NET Core on Linux with LLDB/SOS or remotely with Visual Studio as easily as possible.

The ASP.NET Core team now believes they need more documentation on how to diagnose issues in non-Windows environments (including Docker) and the documentation that already exists needs to be more discoverable. Important topics to make discoverable include PerfCollect, PerfView, debugging on Linux using LLDB and SOS, and possibly remote debugging with Visual Studio over SSH.

Issues in Web API Code

Once we gathered diagnostics, most of the perf issues ended up being common problems in the customer’s code. The largest contributor to the app’s slowdown was frequent Generation 2 (Gen 2) GCs (Garbage Collections) which were happening because a commonly-used code path was downloading a lot of images (product images), converting those bytes into base64 strings, responding to the client with those strings, and then discarding the byte[] and string. The images were fairly large (>100 KB), so every time one was downloaded, a large byte[] and string had to be allocated. Because many of the images were shared between multiple clients, we solved the issue by caching the base64 strings for a short period of time (using IMemoryCache).

HttpClient Pooling with HttpClientFactory

When calling out to Web APIs there was a pattern of creating new HttpClient instances rather than using IHttpClientFactory to pool the clients. Despite implementing IDisposable, it is not a best practice to dispose HttpClient instances as soon as they’re out of scope as they will leave their socket connection in a TIME_WAIT state for some time after being disposed. Instead, HttpClient instances should be re-used.

Additional investigation showed that much of the application’s time was spent querying PostgreSQL for data (as is common). There were several underlying issues here.

Database queries were being made in a blocking way instead of being asynchronous. We helped address the most common call-sites and pointed the customer at the AsyncUsageAnalyzer to identify other async cleanup that could help.

Database connection pooling was not enabled. It is enabled by default for SQL Server, but not for PostgreSQL. We re-enabled database connection pooling. It was necessary to have different pooling settings for the common database (used by all requests) and the individual shard databases which are used less frequently. While the common database needs a large pool, the shard connection pools need to be small to avoid having too many open, idle connections.

The Web API had a fairly ‘chatty’ interface with the database and made a lot of small queries. We re-worked this interface to make fewer calls (by querying more data at once or by caching for short periods of time).

There was also some impact from having other background worker containers on the web API’s servers consuming large amounts of CPU. This led to a ‘noisy neighbor’ problem where the web API containers didn’t have enough CPU time for their work. We showed the customer how to address this with Docker resource constraints.

Wrap Up

As shown in the graph below, at the end of our performance tuning, their backend was easily able to handle 3,000 concurrent users and they are now using their ASP.NET Core solution in production. The performance issues they saw overlapped a lot with those we’ve seen from other customers (especially the need for caching and for async calls), but proved to be extra challenging for the developers to diagnose due to the lack of familiarity with Linux and Docker environments.

Performance and Errors Charts look good, up and to the right

Throughput and Tests Charts look good, up and to the right

Some key areas of focus uncovered by this investigation were:

Being mindful of memory allocations to minimize GC pause times

Keeping long-running calls non-blocking/asynchronous

Minimizing calls to external resources (such as other web services or the database) with caching and grouping of requests
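To make the caching point above concrete, here's a minimal sketch of my own (not the customer's code; the IImageStore abstraction and the five-minute window are invented for illustration) of the IMemoryCache approach described earlier - cache the base64 string for a short window so the large byte[] and string aren't re-allocated on every request:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public interface IImageStore
{
    // Hypothetical abstraction over however the product images are downloaded.
    Task<byte[]> DownloadAsync(string imageId);
}

public class CachedImageService
{
    private readonly IMemoryCache _cache;
    private readonly IImageStore _store;

    public CachedImageService(IMemoryCache cache, IImageStore store)
    {
        _cache = cache;
        _store = store;
    }

    public Task<string> GetImageBase64Async(string imageId)
    {
        // GetOrCreateAsync keeps the expensive download + base64 conversion off the hot path
        // for images many clients share; entries expire so memory doesn't grow unbounded.
        return _cache.GetOrCreateAsync(("img", imageId), async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            byte[] bytes = await _store.DownloadAsync(imageId);
            return Convert.ToBase64String(bytes);
        });
    }
}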

Hope you find this useful! Big thanks to Mike Rousos from the ASP.NET Core team for his work and analysis!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

New prescriptive guidance for Open Source .NET Library Authors

Oct 16, 2018

Description:

Open-source library guidance

There's a great new bunch of guidance just published representing Best Practices for creating .NET Libraries. Best of all, it was shepherded by JSON.NET's James Newton-King. Who better to help explain the best way to build and publish a .NET library than the author of the world's most popular open source .NET library?

Perhaps you've got an open source (OSS) .NET Library on your GitHub, GitLab, or Bitbucket. Go check out the open-source library guidance.

These are the identified aspects of high-quality open-source .NET libraries:

Inclusive - Good .NET libraries strive to support many platforms and applications.
Stable - Good .NET libraries coexist in the .NET ecosystem, running in applications built with many libraries.
Designed to evolve - .NET libraries should improve and evolve over time, while supporting existing users.
Debuggable - .NET libraries should use the latest tools to create a great debugging experience for users.
Trusted - .NET libraries have developers' trust by publishing to NuGet using security best practices.

The guidance is deep but also preliminary. As with all Microsoft Documentation these days it's open source in Markdown and on GitHub. If you've got suggestions or thoughts, share them! Be sure to sound off in the Feedback Section at the bottom of the guidance. James and the Team will be actively incorporating your thoughts.

Cross-platform targeting

Since the whole point of .NET Core and the .NET Standard is reuse, this section covers how and why to make reusable code but also how to access platform-specific APIs when needed with multi-targeting.

Strong naming

Strong naming seemed like a good idea, but you should know WHY and WHEN to strong name. It all depends on your use case! Are you publishing internally or publicly? What are your dependencies and who depends on you?

NuGet

When publishing on the NuGet public repository (or your own private/internal one) what do you need to know about SemVer 2.0.0? What about pre-release packages? Should you embed PDBs for easier debugging? Consider things like Dependencies, SourceLink, how and where to Publish and how Versioning applies to you and when (or if) you cause Breaking changes.

Also be sure to check out Immo's video on "Building Great Libraries with .NET Standard" on YouTube!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

C# and .NET Core scripting with the "dotnet-script" global tool

Oct 12, 2018

Description:

dotnet script

You likely know that open source .NET Core is cross platform and it's super easy to do "Hello World" and start writing some code.

You just install .NET Core, then run "dotnet new console" to generate a project file and a basic app, and then "dotnet run" to compile and run your app. The 'new' command will create all the supporting code, obj, and bin folders, etc. When you do "dotnet run" it's actually a combination of "dotnet build" and "dotnet exec whatever.dll."

What could be easier?

What about .NET Core as scripting?

Check out dotnet script:

C:\Users\scott\Desktop\scriptie> dotnet tool install -g dotnet-script
You can invoke the tool using the following command: dotnet-script
C:\Users\scott\Desktop\scriptie>copy con helloworld.csx
Console.WriteLine("Hello world!");
^Z
1 file(s) copied.
C:\Users\scott\Desktop\scriptie>dotnet script helloworld.csx
Hello world!

NOTE: I was a little tricky there in step two. I did a "copy con filename" to copy from the console to the destination file, then used Ctrl-Z to finish the copy. Feel free to just use notepad or vim. That's not dotnet-script-specific, that's Hanselman-specific.

Pretty cool eh? If you're doing this on Linux or macOS you'll need to include a "shebang" as the first line of the script. This is a standard thing for scripting files like bash, python, etc.

#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");

This lets the operating system know what scripting engine handles this file.

If you want to refer to a NuGet package within a script (*.csx) file, you'll use the Roslyn #r syntax:

#r "nuget: AutoMapper, 6.1.0"
Console.WriteLine("whatever);

Even better! Once you have "dotnet-script" installed as a global tool as above:

dotnet tool install -g dotnet-script

You can use it as a REPL! Finally, the C# REPL (Read Evaluate Print Loop) I've been asking for for only a decade! ;)

C:\Users\scott\Desktop\scriptie>dotnet script
> 2+2
4
> var x = "scott hanselman";
> x.ToUpper()
"SCOTT HANSELMAN"

This is super useful as a learning tool if you're teaching C# in a lab/workshop situation. Of course you could also learn using http://try.dot.net in the browser as well.

In the past you may have used ScriptCS for C# scripting. There are a number of cool C#/F# scripting options. This is certainly not a new thing:

Mono scripting
ScriptCS (Filip was core on this)
Roslyn Scripting APIs (Roslyn is underneath most scripting environments)
CS-Script

In this case, I was very impressed with the ease of dotnet-script as a global tool and its simplicity. Go check out https://github.com/filipw/dotnet-script and try it out today!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Using Enhanced Mode Ubuntu 18.04 for Hyper-V on Windows 10

Oct 10, 2018

Description:

I run Windows as my daily driver and I use WSL (Windows Subsystem for Linux) all day long, but WSL is command-line only and has some perf issues with heavy file system work. I use Docker for Windows, which works amazingly and has good perf, but sometimes I want to test on a full Ubuntu Desktop.

ASIDE: No joke. My Linux/Ubuntu bona fides go back a while. Here's me installing Ubuntu 10.4 on Windows 7 over 8 years ago. Umuntu ngumuntu ngabantu!

To be frank, historically Ubuntu has sucked on Windows' Hyper-V. If you wanted to get a higher (read: usable) resolution it would take a miracle. If you wanted shared clipboards or shared disk drives, well, again, miracle or a ton of manual setup. It's possible but it's not fun.

Why can't it be easy? Well, it is. I installed the Windows 10 "October 2018 Update" - yes, these names are stupid. It's Windows 10 "1809" - that's 2018 and the 9th month. Just type "Winver" from the Start menu. You may have "1803" from March. Go update.

Windows 10 includes Hyper-V Quick Create which has this suspiciously short list under "Select an operating system." Anytime a list has 1 or 2 items and some whitespace that means it will someday have n+1 list items.

Recently Ubuntu 18.04.1 LTS showed up in this list. You can quickly and easily create an Ubuntu VM from here and it's all handled, downloading, network switch, VM create, etc.

Create Virtual Machine

I dig it. So click create, start it up...get to the set up screen. Now, here, make sure you click "Require my password to login." What we want to do won't work with "Log in Automatically" and you don't want that anyway.

Setting up an Ubuntu VM

After you've created your VM and got it mostly setup, close the Hyper-V client window. Just X it out. The VM is still running of course.

Go over to Hyper-V Manager and right click on it and "Connect."

Connect to VM

You'll see a resolution dialog...pick one! Go crazy! Do be aware that there are issues on 4K displays but you can adjust within Ubuntu itself.

Set Resolution

Now, BEFORE you click Connect, click "Show Options" and then "Local Resources." Under here, uncheck Smart Cards and Check "Drives."

Uncheck Smart Cards and Check Drives

Click OK and Connect...and you get this weird dialog! You're actually RDP'ing into Ubuntu! Rather than using the historical weird Hyper-V Client stuff to talk to Ubuntu and struggle with video cards and resolutions, here you are literally just Remote Desktoping into Ubuntu using integrated open source xrdp!

Login with your name and password (remember before when I said don't automatically login? This is why.)

Login to xrdp

What about Dynamic Resizing?

Here's an even better possible future. What we REALLY want (don't we, Dear Reader) is Dynamic Resolution and Resizing without Reconnection! Today you can just close and reconnect to change resolutions but I'd love to just resize the Ubuntu window like I do Windows 7/8/10 VM client windows.

The feature "Dynamic resolution update" was introduced in RDP 8.1. It enables to resize screen resolution on-the-fly.

Since we are using xrdp, which is open source at https://github.com/neutrinolabs/xrdp/, AND there's even an issue about this, AND a lovely person has the code in their own branch and has agreed to possibly upstream it, maybe we can start using it and this great feature will just light up for folks who use Hyper-V Quick Create. Certainly we're talking weeks and months here (unless you want to help), but the lion's share of the work is done. I'm looking forward to resizing Ubuntu VMs dynamically.

What's in Enhanced Mode Today?

Back to today! You can read about how Linux VMs (Ubuntu or Arch) are set up in this GitHub repo: https://github.com/Microsoft/linux-vm-tools. You can set them up yourself with scripts, but the nice thing about Hyper-V Quick Create is that the work is done for us to make these "enhanced session" RDP-friendly VMs. No need to fear, you can just read the scripts yourself.

I can connect quickly and Enhanced Mode VMs give me:

a shared clipboard
the resolution of my choice on connect
fast painting/video/scrolling
automatic shared drives
smooth and automatic mouse capture

Fantastic.

Ubuntu on Windows 10

What about installing Visual Studio Code? Of course. And also .NET Core, natch.

image

This took like 10 min and 8 of it was waiting for Hyper-V Create to download Ubuntu. Try it out!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Troubleshooting Windows 10 Nearby Sharing and Bluetooth Antennas

Oct 5, 2018

Description:

wifi

When building my Ultimate Developer PC I picked this motherboard, and it's lovely.

ASUS ROG STRIX LGA2066 X299 ATX Motherboard - Good solid board with built in BT and Wifi, an M.2 heatsink included, 3x PCIe 3.0 x16 SafeSlots (supports triple @ x16/x16/x8), 1x PCIe 3.0 x4, 2x PCIe 3.0 x1 and a Max of 128 gigs of RAM. It also has 8x USB 3.1s and a USB C which is nice.

I put it all together and I'm thrilled with the machine. However, recently I was trying to use the new Windows 10 "Nearby Devices" feature.

It's this cool feature that lets you share stuff to "Nearby Devices" - that means your laptop, other desktops, whatever. Similar to AirDrop, it solves that problem of moving stuff between devices without using an intermediate server.

You can turn it on in Settings on Windows 10 and decide if you want to receive data from everyone or just contacts.

Nearby Sharing

So I started using it on my new desktop, IRONHEART, but I kept getting this "Looking for nearby devices" dialog...and it would just do nothing.

Looking for Nearby Devices

It turns out that the ASUS Motherboard also comes with a Wi-Fi Antenna. I don't use Wifi (I'm wired) so I didn't bother attaching it. It seems that this antenna is also a Bluetooth antenna and if you plug it in you'll ACTUALLY GET A LOVELY BLUETOOTH SIGNAL. Who knew? ;)

Now I can easily right click on files in Explorer or Web Pages in Edge and transfer them between systems.

Sharing a file with Nearby Sharing

A few tips on Nearby Sharing

Make sure you know your visibility settings. From the Start Menu type "nearby sharing" and confirm them.
Make sure the receiving device doesn't have "Focus Assist" on (via the Action Center in the lower right of the screen) or you might miss the notification.
And if you're using a desktop like me, ahem, plug in your BT antenna.

Hope this helps someone because Nearby Sharing is a great feature that I'm now using all the time.

Sponsor: Telerik DevCraft is the comprehensive suite of .NET and JavaScript components and productivity tools developers use to build high-performant, modern web, mobile, desktop apps and chatbots. Try it!
© 2018 Scott Hanselman. All rights reserved.
     

Headless CMS and Decoupled CMS in .NET Core

Oct 3, 2018

Description:

Headless by Wendy used under CC https://flic.kr/p/HkESxW

I'm sure I'll miss some, so if I do, please sound off in the comments and I'll update this post over the next week or so!

Lately I've been noticing a lot of "Headless" CMSs (Content Management Systems). A ton, in fact. I wanted to explore this concept and see if it's a fad or if it's really something useful.

With the rise of clean RESTful APIs has come the rise of Headless CMS systems. We've all evaluated CMS systems (ones that included both front- and back-ends) and found the front-end wanting. Perhaps it lacks flexibility OR it's way too flexible and overwhelming. In fact, when I wrote my podcast website I considered a CMS but decided it felt too heavy for just a small site.

A Headless CMS is a back-end only content management system (CMS) built from the ground up as a content repository that makes content accessible via a RESTful API for display on any device.

I could start with a database but what if I started with a CMS that was just a backend - a headless CMS. I'll handle the front end, and it'll handle the persistence.
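To make that split concrete, here's a hedged sketch of my own (the https://cms.example.com endpoint and the Article shape are made up for illustration) of what the front end does against a headless back end - fetch JSON over HTTP and render it however you like:

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class Article
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public class HeadlessCmsClient
{
    private readonly HttpClient _http;

    public HeadlessCmsClient(HttpClient http) => _http = http;

    public async Task<Article> GetArticleAsync(string slug)
    {
        // The CMS owns persistence and exposes content as JSON; the front end just asks for it.
        // "https://cms.example.com" is a placeholder endpoint, not any particular product's API.
        var json = await _http.GetStringAsync($"https://cms.example.com/api/articles/{slug}");
        return JsonConvert.DeserializeObject<Article>(json);
    }
}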

Here's what I found when exploring .NET Core-based Headless CMSs. One thing worth noting is that, given Docker containers and the ease with which we can deploy hybrid systems, some of these solutions have .NET Core front-ends and "who cares, it returns JSON" for the back-end!

Lynicon

Lynicon is literally implemented as a NuGet Library! It stores its data as structured JSON. It's built on top of ASP.NET Core and uses MVC concepts and architecture.

It does include a front-end for administration but it's not required. It will return HTML or JSON depending on what HTTP headers are sent in. This means you can easily use it as the back-end for your Angular or existing SPA apps.

Lynicon is largely open source at https://github.com/jamesej/lyniconanc. If you want to take it to the next level there's a small fee that gives you updated searching, publishing, and caching modules.

ButterCMS

ButterCMS is an API-based CMS that seamlessly integrates with ASP.NET applications. It has an SDK that drops into ASP.NET Core and also returns data as JSON. Pulling the data out and showing it in a view is easy.

public class CaseStudyController : Controller
{
    private ButterCMSClient Client;

    private static string _apiToken = "";

    public CaseStudyController()
    {
        Client = new ButterCMSClient(_apiToken);
    }

    [Route("customers/{slug}")]
    public async Task<ActionResult> ShowCaseStudy(string slug)
    {
        var json = await Client.ListPageAsync("customer_case_study", slug);
        dynamic page = ((dynamic)JsonConvert.DeserializeObject(json)).data.fields;
        ViewBag.SeoTitle = page.seo_title;
        ViewBag.FacebookTitle = page.facebook_open_graph_title;
        ViewBag.Headline = page.headline;
        ViewBag.CustomerLogo = page.customer_logo;
        ViewBag.Testimonial = page.testimonial;
        return View("Location");
    }
}

Then of course output into Razor (or putting all of this into a RazorPage) is simple:

<html>
<head>
    <title>@ViewBag.SeoTitle</title>
    <meta property="og:title" content="@ViewBag.FacebookTitle" />
</head>
<body>
    <h1>@ViewBag.Headline</h1>
    <img width="100%" src="@ViewBag.CustomerLogo">
    <p>@ViewBag.Testimonial</p>
</body>
</html>

Butter is a little different (and somewhat unusual) in that their backend API is a SaaS (Software as a Service) and they host it. They then have SDKs for lots of platforms including .NET Core. The backend is not open source while the front-end is https://github.com/ButterCMS/buttercms-csharp.

Piranha CMS

Piranha CMS is built on ASP.NET Core and is open source on GitHub. It's also totally package-based using NuGet and can be easily started up with a dotnet new template like this:

dotnet new -i Piranha.BasicWeb.CSharp
dotnet new piranha
dotnet restore
dotnet run

It even includes a new Blog template that includes Bootstrap 4.0 and is all set for customization. It does include an optional lightweight front-end, but you can use it as a guideline to create your own client code. One nice touch is that Piranha also includes image resizing and cropping.

Umbraco Headless

The main ASP.NET website currently uses Umbraco as its CMS. Umbraco is a well-known open source CMS that will soon include a Headless option for more flexibility. The open source code for Umbraco is up here https://github.com/umbraco.

Orchard Core

Orchard is a CMS with a very strong community and fantastic documentation. Orchard Core is a redevelopment of Orchard using open source ASP.NET Core. While it's not "headless" it is using a Decoupled Architecture. Nothing would prevent you from removing the UI and presenting the content with your own front-end. It's also cross-platform and container friendly.

Squidex

"Squidex is an open source headless CMS and content management hub. In contrast to a traditional CMS Squidex provides a rich API with OData filter and Swagger definitions." Squidex is build with ASP.NET Core and the CQRS pattern and works with both Windows and Linux on today's browsers.

Squidex is open source with excellent docs at https://docs.squidex.io. They are also working on a hosted version you can play with here https://cloud.squidex.io. Samples on how to consume it are here https://github.com/Squidex/squidex-samples.

The consumption is super clean:

[Route("/{slug},{id}/")] public async Task<IActionResult> Post(string slug, string id) { var post = await apiClient.GetBlogPostAsync(id); var vm = new PostVM { Post = post }; return View(vm); }

And then the View:

@model PostVM
@{
    ViewData["Title"] = Model.Post.Data.Title;
}
<div>
    <h2>@Model.Post.Data.Title</h2>
    @Html.Raw(Model.Post.Data.Text)
</div>

What .NET Core Headless CMSs did I miss? Let me know.

This definitely isn't a fad. It makes a lot of sense to me architecturally. Given the proliferation of "backend as a service" systems, DocumentDBs like Cosmos and Mongo, it follows that a headless CMS could easily fit into my systems. One less DB schema to think about, no need to roll my own auth/auth.

*Photo "headless" by Wendy used under CC https://flic.kr/p/HkESxW

Sponsor: Telerik DevCraft is the comprehensive suite of .NET and JavaScript components and productivity tools developers use to build high-performant, modern web, mobile, desktop apps and chatbots. Try it!


© 2018 Scott Hanselman. All rights reserved.
     

Exploring .NET Core's SourceLink - Stepping into the Source Code of NuGet packages you don't own

Sep 28, 2018

Description:

According to https://github.com/dotnet/sourcelink, SourceLink "enables a great source debugging experience for your users, by adding source control metadata to your built assets."

Sounds fantastic. I download a NuGet package to use something like Json.NET or whatever all the time, and I'd love to be able to "Step Into" the source even if I don't have it laying around. Per the GitHub repo, it's both language and source control agnostic. I read that to mean "not just C# and not just GitHub."

Visual Studio 15.3+ supports reading SourceLink information from symbols while debugging. It downloads and displays the appropriate commit-specific source for users, such as from raw.githubusercontent, enabling breakpoints and all other sources debugging experience on arbitrary NuGet dependencies. Visual Studio 15.7+ supports downloading source files from private GitHub and Azure DevOps (former VSTS) repositories that require authentication.

Looks like Cameron Taggart did the original implementation and then the .NET team worked with Cameron and the .NET Foundation to make the current version. Also cool.

Download Source and Continue Debugging

Let me see if this really works and how easy (or not) it is.

I'm going to make a little library using the 5 year old Pseudointernationalizer from here. Fortunately the main function is pretty pure and drops into a .NET Standard library neatly.

I'll put this on GitHub, so I will include "PublishRepositoryUrl" and "EmbedUntrackedSources" as well as including the PDBs. So far my CSPROJ looks like this:

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
<PublishRepositoryUrl>true</PublishRepositoryUrl>
<EmbedUntrackedSources>true</EmbedUntrackedSources>
<AllowedOutputExtensionsInPackageBuildOutputFolder>$(AllowedOutputExtensionsInPackageBuildOutputFolder);.pdb</AllowedOutputExtensionsInPackageBuildOutputFolder>
</PropertyGroup>
</Project>

Pretty straightforward so far. As I am using GitHub I added this reference, but if I was using GitLab or BitBucket, etc, I would use that specific provider per the docs.

<ItemGroup>
<PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.0.0-beta-63127-02" PrivateAssets="All"/>
</ItemGroup>

Now I'll pack up my project as a NuGet package.

D:\github\SourceLinkTest\PsuedoizerCore [master ≡]> dotnet pack -c release
Microsoft (R) Build Engine version 15.8.166+gd4e8d81a88 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restoring packages for D:\github\SourceLinkTest\PsuedoizerCore\PsuedoizerCore.csproj...
Generating MSBuild file D:\github\SourceLinkTest\PsuedoizerCore\obj\PsuedoizerCore.csproj.nuget.g.props.
Restore completed in 96.7 ms for D:\github\SourceLinkTest\PsuedoizerCore\PsuedoizerCore.csproj.
PsuedoizerCore -> D:\github\SourceLinkTest\PsuedoizerCore\bin\release\netstandard2.0\PsuedoizerCore.dll
Successfully created package 'D:\github\SourceLinkTest\PsuedoizerCore\bin\release\PsuedoizerCore.1.0.0.nupkg'.

Let's look inside the .nupkg as they are just ZIP files. Ah, check out the generated *.nuspec file that's inside!

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
<metadata>
<id>PsuedoizerCore</id>
<version>1.0.0</version>
<authors>PsuedoizerCore</authors>
<owners>PsuedoizerCore</owners>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>Package Description</description>
<repository type="git" url="https://github.com/shanselman/PsuedoizerCore.git" commit="35024ca864cf306251a102fbca154b483b58a771" />
<dependencies>
<group targetFramework=".NETStandard2.0" />
</dependencies>
</metadata>
</package>

See under repository it points back to the location AND commit hash for this binary! That means I can give it to you or a coworker and they'd be able to get to the source. But what's the consumption experience like? I'll go over and start a new Console app that CONSUMES my NuGet library package. To make totally sure that I don't accidentally pick up the source from my machine I'm going to delete the entire folder. This source code no longer exists on this machine.

I'm using a "local" NuGet Feed. In fact, it's just a folder. Check it out:

D:\github\SourceLinkTest\AConsumerConsole> dotnet add package PsuedoizerCore -s "c:\users\scott\desktop\LocalNuGetFeed"
Writing C:\Users\scott\AppData\Local\Temp\tmpBECA.tmp
info : Adding PackageReference for package 'PsuedoizerCore' into project 'D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj'.
log : Restoring packages for D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj...
info : GET https://api.nuget.org/v3-flatcontainer/psuedoizercore/index.json
info : NotFound https://api.nuget.org/v3-flatcontainer/psuedoizercore/index.json 465ms
log : Installing PsuedoizerCore 1.0.0.
info : Package 'PsuedoizerCore' is compatible with all the specified frameworks in project 'D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj'.
info : PackageReference for package 'PsuedoizerCore' version '1.0.0' added to file 'D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj'.

See how I used -s to point to an alternate source? I could also configure my NuGet feeds, be they local directories or internal servers, with "dotnet new nugetconfig", listing my NuGet servers in the order I want them searched.

Here is my little app:

using System;
using Utils;

namespace AConsumerConsole
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine(Pseudoizer.ConvertToFakeInternationalized("Hello World!"));
}
}
}

And the output is [Ħęľľő Ŵőřľđ! !!! !!!].

But can I step into it? I don't have the source remember...I'm using SourceLink.

In Visual Studio 2017 I confirm that SourceLink is enabled. This is the Portable PDB version of SourceLink, not the "SourceLink 1.0" that was "Enable Source Server Support." That only worked on Windows.

Enable Source Link Support

You'll also want to turn off "Just My Code" since, well, this isn't your code.

Disable Just My Code

Now I'll start a Debug Session in my consumer app and hit F11 to Step Into the Library whose source I do not have!

Source Link Will Download from The Internet

Fantastic. It's going to get the source for me! Without git cloning the repository it will seamlessly let me continue my debugging session.

The temporary file ended up in C:\Users\scott\AppData\Local\SourceServer\4bbf4c0dc8560e42e656aa2150024c8e60b7f9b91b3823b7244d47931640a9b9 if you're interested. I'm able to just keep debugging as if I had the source...because I do! It came from the linked source.

Debugging into a NuGet that I don't have the source for

Very cool. I'm going to keep digging into SourceLink and learning about it. It seems that if YOU have a library or published NuGet either inside your company OR out in the open source world that you absolutely should be using SourceLink.

You can even install the sourcelink global tool and test your .pdb files for greater insight.

D:\github\SourceLinkTest\PsuedoizerCore>dotnet tool install --global sourcelink
D:\github\SourceLinkTest\PsuedoizerCore\bin\release\netstandard2.0>sourcelink print-urls PsuedoizerCore.pdb
43c83e7173f316e96db2d8345a3f963527269651 sha1 csharp D:\github\SourceLinkTest\PsuedoizerCore\Psuedoizer.cs
https://raw.githubusercontent.com/shanselman/PsuedoizerCore/02c09baa8bfdee3b6cdf4be89bd98c8157b0bc08/Psuedoizer.cs
bfafbaee93e85cd2e5e864bff949f60044313638 sha1 csharp C:\Users\scott\AppData\Local\Temp\.NETStandard,Version=v2.0.AssemblyAttributes.cs
embedded

Think about how much easier consumers of your library will have it when debugging their apps! Your package is no longer a black box. Go set this up on your projects today.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

A command-line REPL for RESTful HTTP Services

Sep 25, 2018

Description:

HTTP REPL

My, that's a lot of acronyms. REPL means "Read Evaluate Print Loop." You know how you can run "python" and then just type 2+2 and get the answer? That's a type of REPL.

The ASP.NET Core team is building a REPL that lets you explore and interact with your RESTful services. Ideally your services will have Swagger/OpenAPI available that describes the service. Right now this Http-REPL is just being developed and they're aiming to release it as a .NET Core Global Tool in .NET Core 2.2.

You can install global tools like this:

dotnet tool install -g nyancat

Then you can run "nyancat." Get a list of installed tools like this:

C:\Users\scott> dotnet tool list -g
Package Id               Version                  Commands
--------------------------------------------------------------------
altcover.global          3.5.560                  altcover
dotnet-depends           0.1.0                    dotnet-depends
dotnet-httprepl          2.2.0-preview3-35304     dotnet-httprepl
dotnet-outdated          2.0.0                    dotnet-outdated
dotnet-search            1.0.0                    dotnet-search
dotnet-serve             1.0.0                    dotnet-serve
git-status-cli           1.0.0                    git-status
github-issues-cli        1.0.0                    ghi
nukeeper                 0.7.2                    NuKeeper
nyancat                  1.0.0                    nyancat
project2015to2017.cli    1.8.1                    csproj-to-2017

For the HTTP-REPL, since it's not yet released you have to point the Tool Feed to a daily build location, so do this:

dotnet tool install -g --version 2.2.0-* --add-source https://dotnet.myget.org/F/dotnet-core/api/v3/index.json dotnet-httprepl

Then run it with "dotnet httprepl." I'd like another name, though. What do you think? RESTy? POSTr? API Test? API View?

Here's an example run where I start up a Web API.

C:\SwaggerApp> dotnet httprepl
(Disconnected)~ set base http://localhost:65369
Using swagger metadata from http://localhost:65369/swagger/v1/swagger.json
http://localhost:65369/~ dir
.        []
People   [get|post]
Values   [get|post]
http://localhost:65369/~ cd People
/People    [get|post]
http://localhost:65369/People~ dir
.       [get|post]
..      []
{id}    [get]
http://localhost:65369/People~ get
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 26 Sep 2018 20:25:37 GMT
Server: Kestrel
Transfer-Encoding: chunked

[
  {
    "id": 1,
    "name": "Scott Hunter"
  },
  {
    "id": 0,
    "name": "Scott Hanselman"
  },
  {
    "id": 2,
    "name": "Scott Guthrie"
  }
]

Take a moment and read that. It can be a little confusing. It's not HTTPie, it's not Curl, but it's also not PostMan. It's something that you run and keep running if you're a command-line person and enjoy that space. It's as if you "cd (change directory)" and "mount" a disk into your Web API.

You can use all the HTTP Verbs, and when POSTing you can set a default text editor and it will launch the editor with the JSON written for you! Give it a try!
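For context, the People endpoint browsed in that session could be backed by a minimal ASP.NET Core controller roughly like this (a sketch; the actual SwaggerApp project isn't shown in this post):

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

[Route("[controller]")]
[ApiController]
public class PeopleController : ControllerBase
{
    // Hard-coded data just to give the REPL something to browse.
    private static readonly List<Person> People = new List<Person>
    {
        new Person { Id = 1, Name = "Scott Hunter" },
        new Person { Id = 0, Name = "Scott Hanselman" },
        new Person { Id = 2, Name = "Scott Guthrie" }
    };

    [HttpGet]
    public IEnumerable<Person> Get() => People;

    [HttpGet("{id}")]
    public ActionResult<Person> Get(int id)
    {
        var person = People.Find(p => p.Id == id);
        if (person == null) return NotFound();
        return person;
    }
}

With Swashbuckle generating the Swagger document, the REPL picks up the routes and verbs automatically, which is what produces that "dir" listing above.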

A few gotchas/known issues:

You'll want to set a default Content-Type Header for your session. I think this should be the default: set header Content-Type application/json

If the HTTP REPL doesn't automatically detect your Swagger/OpenAPI endpoint, you'll need to set it manually: set base https://yourapi/api/v1/ and set swagger https://yourapi/swagger.json

I haven't figured out how to get it to use VS Code as its default editor. Likely because "code.exe" isn't a thing. (It uses a batch .cmd file, which the HTTP REPL would need to special case.) For now, use an editor that's an EXE and point the HTTP REPL at it like this: pref set editor.command.default 'c:\notepad2.exe'

I'm really enjoying this idea. I'm curious how you find it and how you'd see it being used. Sound off in the comments.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

Scripts to remove old .NET Core SDKs

Sep 20, 2018

Description:

That's a lot of .NET Core installations

.NET Core is lovely. Its usage is skyrocketing, it's open source, and .NET Core 2.1 has some amazing performance improvements. Just upgrading from 2.0 to 2.1 gave Bing a 34% performance boost.

However, those of us who install multiple .NET Core SDKs side by side have noticed that they add up if you're installing daily builds or updating very often. As of 2.x, .NET Core doesn't yet have an "uninstall all" or "uninstall all previews" option. There will be work done in .NET Core 3.0 that will mitigate this cumulative effect when you have lots of installers.

If you're taking dailies and it's time to tidy up, the short answer per Damian Edwards is "Delete them all, then nuke the dotnet folder in program files, then install the latest version."

Here's a PowerShell Script you can run on Windows as admin that will aggressively uninstall .NET Core SDKs.

Note the match at the top. Depending on your goals, you might want to change it to "Microsoft .NET Core SDK 2.1" or just "Microsoft .NET Core SDK 2."

Once it's all removed, then add the latest from https://www.microsoft.com/net/download/archives

A list of .NET Core SDKs

Here's the script, which is an improvement on Andrew's comment here. You can improve it as it's on GitHub here https://github.com/shanselman/RemoveDotNetCoreSDKInstallers. This script currently requires you to hit YES as the MSIs elevate. It doesn't work right when you try /passive as a switch. I'm interested to see if you can get a "torch all Core SDK installers and install LTS and Current" script working.

$app = Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -match "Microsoft .NET Core SDK" }
Write-Host $app.Name
Write-Host $app.IdentifyingNumber
pushd $env:SYSTEMROOT\System32
$app.IdentifyingNumber | % { Start-Process msiexec -wait -ArgumentList "/x $_" }
popd

This PowerShell is Windows-only, of course.

If you're on RHEL, Ubuntu/Debian, there are scripts here to try out https://github.com/dotnet/cli/tree/master/scripts/obtain/uninstall

Let me know if this script works for you.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

Azure DevOps Continuous Build/Deploy/Test with ASP.NET Core 2.2 Preview in One Hour

Sep 18, 2018

Description:

Hanselminutes Website

I've been doing Continuous Integration and Deployment for well over 13 years. We used a lot of custom scripts and a lovely tool called CruiseControl.NET to check out, build, test, and deploy our code.

However, it's easy to get lulled into complacency. To get lazy. I don't set up Automated Continuous Integration and Deployment for all my little projects. But I should.

I was manually deploying a change to my podcast website this evening via a git deploy to Azure App Service. Pushing to Azure this way via Git uses "Kudu" to actually build the site. However, earlier this week I was also trying to update my site to .NET Core 2.2 which is in preview. Plus I have Unit Tests that aren't getting run during deploy.

So look at it this way. My simple little podcast website with a few tests and the desire to use a preview .NET Core SDK means I've outgrown a basic "git push to prod" for deploy.

I remembered that Azure DevOps (formerly VSTS) is out and offers free unlimited minutes for open source projects. I have no excuse for my sloppy builds and manual deploys. It also has unlimited free private repos, although I'm happy at GitHub and have no reason to move.

It usually takes me 5-10 minutes for a manual build/test/deploy, so I gave myself an hour to see if I could get this same process automated in Azure DevOps. I've never used this before and I wanted to see if I could do it quickly, and if it was intuitive.

Let's review my goals.

My source is in GitHub
Build my ASP.NET Core 2.2 Web Site. I want to build with .NET Core 2.2, which is currently in Preview.
Run my xUnit Unit Tests. I have some Selenium Unit Tests that can't run in the cloud (at least, I haven't figured it out yet) so I need them skipped.
Deploy the resulting site to production in my Azure App Service

Cool. So I make a project and point Azure DevOps at my GitHub.

Azure DevOps: Source code in GitHub

They have a number of starter templates, so I was pleasantly surprised I didn't need to build my Build Configuration by hand. I'll pick ASP.NET app. I could pick Azure Web App for ASP.NET but I wanted a little more control.

Select a template

Now I've got a basic build pipeline. You can see it will use NuGet, get the packages, build the app, test the assemblies (if there are tests...more on that later) and then publish (zip) the build artifacts.

Build Pipeline

I then clicked Save & Queue...and it failed. Why? It says that I'm targeting .NET Core 2.2 and it doesn't support anything over 2.1. Shoot.

Agent says it doesn't support .NET Core 2.2

Fortunately there's a pipeline element that I can add called ".NET Core Tool Installer" that will get specific versions of the .NET Core SDK.

NOTE: I've emailed the team that ".NET Tool Installer" is the wrong name. A .NET Tool is a totally different thing. This task should be called the ".NET Core SDK Installer." Because it wasn't, it took me a minute to find it and figure out what it does.

I'm using SDK version 2.2.100-preview2-009404, so I put that string into the properties.

Install the .NET Core SDK custom version

At this point it builds, but I get a test error.

There are two problems with the tests. When I look at the logs I can see that the "testadapter.dll" that comes with xunit is mistakenly being pulled into the test runner! Why? Because the "Test Files" spec includes a VERY greedy glob in the form of **\*test*.dll. Perhaps testadapter shouldn't include the word test, but then it wouldn't be well-named.

**\$(BuildConfiguration)\**\*test*.dll
!**\obj\**

My test DLLs are all named with "tests" in the filename so I'll change the glob to "**\$(BuildConfiguration)\**\*tests*.dll" to cast a less-wide net.

Screenshot (45)

I have four Selenium Tests for my ASP.NET Core site but I don't want them to run when the tests are run in a Docker Container or, in this case, in the Cloud. (Until I figure out how)

I use SkippableFacts from XUnit and do this:

public static class AreWe
{
    public static bool InDockerOrBuildServer
    {
        get
        {
            string retVal = Environment.GetEnvironmentVariable("DOTNET_RUNNING_IN_CONTAINER");
            string retVal2 = Environment.GetEnvironmentVariable("AGENT_NAME");
            return (
                (String.Compare(retVal, Boolean.TrueString, ignoreCase: true) == 0)
                ||
                (String.IsNullOrWhiteSpace(retVal2) == false));
        }
    }
}

Don't tease me. I like it. Now I can skip tests that I don't want running.

if (AreWe.InDockerOrBuildServer) return;
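As an aside, here's roughly what that looks like in a test if you're using the Xunit.SkippableFact package (a sketch that assumes the AreWe helper above; the real Selenium code is elided):

using Xunit;

public class HomePageTests
{
    [SkippableFact]
    public void HomePageLoadsAndHasTitle()
    {
        // Reported as "skipped" rather than "passed" on the build server or in a container.
        Skip.If(AreWe.InDockerOrBuildServer, "Selenium isn't available in this environment.");

        // ... the actual Selenium assertions would go here ...
    }
}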

Now my tests run and I get a nice series of charts to show that fact.

22 tests, 4 skipped

I have it building and tests running.

I could add the Deployment Step to the Build but Azure DevOps Pipelines includes a better way. I make a Release Pipeline that is separate. It takes Artifacts as input and runs n number of Stages.

Creating a new Release Pipeline

I take the Artifact from the Build (the zipped up binaries) and pass them through the pipeline into the Azure App Service Deploy step.

Screenshot (49)

Here's the deployment in progress.

Manually Triggered Release

Cool! Now that it works and deploys, I can turn on Continuous Integration Build Triggers (via an automatic GitHub webhook) as well as Continuous Deployment triggers.

Continuous Deployment

Azure DevOps even includes badges that I can add to my readme.md so I always know by looking at GitHub if my site builds AND if it has successfully deployed.

4 releases, the final one succeeded

Now I can see each release as it happens and if it's successful or not.

Build Succeeded, Never Deployed

To top it all off, now that I have all this data and these pipelines, I even put together a nice little dashboard in about a minute to show Deployment Status and Test Trends.

My build and deployment dashboard

When I combine the DevOps Dashboard with my main Azure Dashboard I'm amazed at how much information I can get in so little effort. Consider that my podcast (my little business) is a one-person shop.

Azure Dashboard

And now I have a CI/CD pipeline with integrated testing gates that deploys worldwide. Many years ago this would have required a team and a lot of custom code.

Today it took an hour. Awesome.

I check into GitHub, which kicks off a build, runs the tests, emails me the results, and deploys the website if everything is cool. Of course, if I had another team member I could put in deployment gates or reviews, etc.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

A complete containerized .NET Core Application microservice that is as small as possible

Sep 14, 2018

Description:

OK, maybe not technically a microservice, but that's a hot buzzword these days, right? A few weeks ago I blogged about Improvements on ASP.NET Core deployments on Zeit's now.sh and making small container images. By the end I was able to cut my container size in half.

The trimming I was using is experimental and very aggressive. If your app loads things at runtime - like ASP.NET Razor Pages sometimes does - you may end up getting weird errors at runtime when a Type is missing. Some types may have been trimmed away!

For example:

fail: Microsoft.AspNetCore.Server.Kestrel[13]
Connection id "0HLGQ1DIEF1KV", Request id "0HLGQ1DIEF1KV:00000001": An unhandled exception was thrown by the application.
System.TypeLoadException: Could not load type 'Microsoft.AspNetCore.Diagnostics.IExceptionHandlerPathFeature' from assembly 'Microsoft.Extensions.Primitives, Version=2.1.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.HostFiltering.HostFilteringMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Hosting.Internal.HostingApplication.ProcessRequestAsync(Context context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)

Yikes!

I'm doing a self-contained deployment and then trimming the result! Richard Lander has a great dockerfile example. Note how he's doing the package addition with the dotnet CLI with "dotnet add package" and subsequent trim within the Dockerfile (as opposed to you adding it to your local development copy's csproj).

I'm adding the Tree Trimming Linker in the Dockerfile, so the trimming happens when the container image is built. I'm using the dotnet command to "dotnet add package ILLink.Tasks". This means I don't need to reference the linker package at development time - it's all at container build time.

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

I did end up hitting this bug in the Linker (it's not Released) but there's an easy workaround. I just need to set the property CrossGenDuringPublish to false in the project file.

If you look at the Advanced Instructions for the Linker you can see that you can "root" types or assemblies. Root means "don't mess with these or stuff that hangs off them." So I just need to exercise my app at runtime and make sure that all the types that my app needs are available, but no unnecessary ones.

I added the Assemblies I wanted to keep (not remove) while trimming/linking to my project file:

<Project Sdk="Microsoft.NET.Sdk.Web">

<PropertyGroup>
<TargetFramework>netcoreapp2.1</TargetFramework>
<CrossGenDuringPublish>false</CrossGenDuringPublish>
</PropertyGroup>

<ItemGroup>
<LinkerRootAssemblies Include="Microsoft.AspNetCore.Mvc.Razor.Extensions;Microsoft.Extensions.FileProviders.Composite;Microsoft.Extensions.Primitives;Microsoft.AspNetCore.Diagnostics.Abstractions" />
</ItemGroup>

<ItemGroup>
<!-- this can be here, or can be done all at runtime in the Dockerfile -->
<!-- <PackageReference Include="ILLink.Tasks" Version="0.1.5-preview-1841731" /> -->
<PackageReference Include="Microsoft.AspNetCore.App" />
</ItemGroup>

</Project>

My strategy for figuring out which assemblies to "root" and exclude from trimming was literally to just iterate. Build, trim, test, add an assembly by reading the error message, and repeat.

This sample ASP.NET Core app will deploy cleanly on Zeit with the smallest image footprint possible. https://github.com/shanselman/superzeit

Next I'll try an actual Microservice (as opposed to a complete website, which is what this is) and see how small I can get that. Such fun!

UPDATE: This technique works with "dotnet new webapi" as well and is about 73 megs per "docker images" and it's 34 megs when sent and squished through Zeit's "now" CLI.

Small services!

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.
© 2018 Scott Hanselman. All rights reserved.
     

How do you use System.Drawing in .NET Core?

Sep 12, 2018

Description:

I've been doing .NET image processing since the beginning. In fact I wrote about it over 13 years ago on this blog when I talked about Compositing two images into one from the ASP.NET Server Side and in it I used System.Drawing to do the work. For over a decade folks using System.Drawing were just using it as a thin wrapper over GDI (Graphics Device Interface), which is a set of very old Win32 (Windows) unmanaged drawing APIs. We use them because they work fine.

.NET Conf: Join us this week! September 12-14, 2018 for .NET Conf! It's a FREE, 3 day virtual developer event co-organized by the .NET Community and Microsoft. Watch all the sessions here. Join a virtual attendee party after the last session ends on Day 1 where you can win prizes! Check out the schedule here and attend a local event in your area organized by .NET community influencers all over the world.

DotNetBot

For a while there was a package called CoreCompat.System.Drawing that was a .NET Core port of a Mono version of System.Drawing.

However, since then Microsoft has released System.Drawing.Common to provide access to GDI+ graphics functionality cross-platform.

There is a lot of existing code - mine included - that makes assumptions that .NET would only ever run on Windows. Using System.Drawing was one of those things. The "Windows Compatibility Pack" is a package meant for developers that need to port existing .NET Framework code to .NET Core. Some of the APIs remain Windows only but others will allow you to take existing code and make it cross-platform with a minimum of trouble.

Here's a super simple app that resizes a PNG to 128x128. However, it's a .NET Core app and it runs in both Windows and Linux (Ubuntu!)

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

namespace imageresize
{
class Program
{
static void Main(string[] args)
{
int width = 128;
int height = 128;
var file = args[0];
Console.WriteLine($"Loading {file}");
using(FileStream pngStream = new FileStream(args[0],FileMode.Open, FileAccess.Read))
using(var image = new Bitmap(pngStream))
{
var resized = new Bitmap(width, height);
using (var graphics = Graphics.FromImage(resized))
{
graphics.CompositingQuality = CompositingQuality.HighSpeed;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.CompositingMode = CompositingMode.SourceCopy;
graphics.DrawImage(image, 0, 0, width, height);
resized.Save($"resized-{file}", ImageFormat.Png);
Console.WriteLine($"Saving resized-{file} thumbnail");
}
}
}
}
}

Here it is running on Ubuntu:

Resizing Images on Ubuntu

NOTE that on Ubuntu (and other Linuxes) you may need to install some native dependencies, as System.Drawing sits on top of native libraries:

sudo apt install libc6-dev
sudo apt install libgdiplus

There are lots of great options for image processing on .NET Core now! It's important to understand that this System.Drawing layer is great for existing System.Drawing code, but you probably shouldn't write NEW image management code with it. Instead, consider one of the great other open source options.

ImageSharp - A cross-platform library for the processing of image files, written in C#. Compared to System.Drawing, ImageSharp has been able to develop something much more flexible, easier to code against, and much, much less prone to memory leaks. Gone are system-wide process-locks; ImageSharp images are thread-safe and fully supported in web environments.

Here's how you'd resize something with ImageSharp:

using (Image<Rgba32> image = Image.Load("foo.jpg"))
{
image.Mutate(x => x
.Resize(image.Width / 2, image.Height / 2)
.Grayscale());
image.Save("bar.jpg"); // Automatic encoder selected based on extension.
}

Magick.NET - A .NET library on top of ImageMagick
SkiaSharp - A .NET wrapper on top of Google's cross-platform Skia library
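For comparison, here's roughly what the same resize looks like with Magick.NET - a quick sketch, not something from the original post:

using ImageMagick;

class Program
{
    static void Main()
    {
        // Load, resize to fit within 128x128 (aspect ratio preserved by default), and save.
        using (var image = new MagickImage("foo.jpg"))
        {
            image.Resize(128, 128);
            image.Write("foo-resized.jpg");
        }
    }
}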

It's awesome that there are so many choices with .NET Core now!

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

The Extremely Promising State of Diabetes Technology in 2018

Sep 7, 2018

Description:

This blog post is an update to these two Diabetes Technology blog posts:

The Promising State of Diabetes Technology in 2016
The Sad State of Diabetes Technology in 2012

You might also enjoy this video of the talk I gave at WebStock 2018 on Solving Diabetes with an Open Source Artificial Pancreas*.

First, let me tell you that insulin is too expensive in the US.

Between 2002 and 2013, the price of insulin jumped, with the typical cost for patients increasing from about $40 a vial to $130.

Open Source Artificial Pancreas on iPhone

For some of the newer insulins like the ones I use, I pay as much as $296 a bottle. I have a Health Savings Plan so this is often out of pocket until I hit the limit for the year.

People in America are rationing insulin. This is a demonstrable fact. I've personally mailed extra insulin to folks in need. I've met young people who lost their insurance at age 26 and have had to skip shots to save vials of insulin.

This is a problem, but on the technology side there's some extremely promising work happening, and we have really hit our stride in the last ten years.

I wrote the first Glucose Management system for the PalmPilot in 1998, called GlucoPilot, providing in-depth analysis on the go for the first time. The first thing that struck me was that the PalmPilot and the Blood Sugar Meter were the same size. Why did I need two devices with batteries, screens, buttons and a CPU? Why so many devices?

I've been told every year that a Diabetes Breakthrough is coming "in five years." It's been 25 years.

In 2001 I went on a trip across the country with my wife, an insulin pump and 8 PDAs (personal digital assistants, the "iPhones" of the time) and tried to manage my diabetes using all the latest wireless technology...this was the latest stuff 17 years ago. I had just moved from injections to an insulin pump. Even now in 2018, insulin pumps are mostly proprietary and super expensive. In fact, many folks in the States who use insulin pumps use out-of-warranty pumps purchased on Craigslist.

Fast forward to 2018 and I've been using an Open Source Artificial Pancreas for two years.

OpenAPS - Open Artificial Pancreas System. A platform for building a closed-loop with open tools.
AndroidAPS - A branch of OpenAPS running on Android
Loop/LoopKit - Open Source Artificial Pancreas running on the iPhone with a hardware bridge (RileyLink) to the pump. I run this pancreas, personally, and have for nearly 2 years.

Watch Dana Lewis (the originator of OpenAPS) talk about OpenAPS at OSCON!

The results speak for themselves. While I do have bad sugars sometimes, and I do struggle, if you look at my blood work my HA1c (the long-term measurement of "how I'm doing") shows non-diabetic levels. To be clear - I'm fully and completely Type 1 diabetic, I produce zero insulin of my own. I take between 40 and 50 Units of insulin every day, and have for the last 25 years...but I will likely die of old age.

Open Source Artificial Pancreas === Diabetes results pic.twitter.com/ZSsApTLRXq

— Scott Hanselman (@shanselman) September 10, 2018

This is significant. Why? Because historically diabetics die of diabetes. While we wait (or more accurately, #WeAreNotWaiting) for a biological/medical solution to Type 1 diabetes, the DIY (Do It Yourself) community is just doing it ourselves.

Building on open hardware, open software, and reverse-engineered protocols for proprietary hardware, the online diabetes community literally has their choice of open source pancreases in 2018! Who would have imagined it. You can choose your algorithms, your phone, your pump, your continuous glucose meter.

Today, in 2018, you can literally change the code and recompile a personal branch of your own pancreas.

Watch my 2010 YouTube video "I am Diabetic" as I walk you through the medical hardware (pumps, needles, tubes, wires) in managing diabetes day to day. Then watch my 2018 talk on Solving Diabetes with an Open Source Artificial Pancreas*.

I believe that every diabetic should be offered a pump, a continuous glucose meter, and trained on some kind of artificial pancreas. A cloud based reporting system has also been a joy. My wife and family can see my sugar in real time when I'm away. My wife has even called me overseas to wake me up when I was in a bad sugar situation.

Artificial Pancreas generations

As the closed-hardware and closed-software medical companies work towards their own artificial pancreases, the open source community feels those companies would better serve us by opening up their protocols, using standard Bluetooth ISO profiles, and following security best practices.

Looking at the table above, the open source community is squarely in #4 and moving quickly into #5. But why did we have to do this ourselves? We got tired of waiting.

All in all, through open software and hardware, I can tell you that my life is SO MUCH BETTER than it was when I was first diagnosed. I figure we'll have this all figured out in about five years, right? ;)

THANK YOU!

MORE DIABETES READING
Bridging Dexcom Share CGM Receivers and Nightscout
Hacking Diabetes
Visualizing your real-time blood sugar values in Git
Diabetes Technology: Dexcom G5 CGM Review
Introducing Web Tiles for Microsoft Band - My diabetes data on a Band!
Diabetics: It's fun to say Bionic Pancreas but how about a reality check
It's WAY too early to call this Insulin Pump an Artificial Pancreas

* Yes there are some analogies, stretched metaphors, and oversimplifications in this talk. This talk is an introduction to the space for the normally-sugared. If you are a diabetes expert you might watch and say...eh...ya, I mean, it kind of works like that. Please take the talk in the thoughtful spirit in which it was intended.

Sponsor: Get home early, eat supper on time and coach your kids in soccer. Moving workloads to Azure just got easy with Azure NetApp Files. Sign up to Preview Azure NetApp Files!


© 2018 Scott Hanselman. All rights reserved.
     

Always Be Closing...Pull Requests

Sep 5, 2018

Description:

Always be closing

I was looking at a Well Known Open Source Project on GitHub today. It had like 978 Pull Requests. A "PR" means "hey here's some code I did for your project, you can PULL it from here and merge it into your code!"

But these were Open Pull Requests. Pending. Limbo Pull Requests. Dating back to 2015.

Why do Pull Requests stay open?

Why do projects keep Pull Requests open? What's a reasonable amount of time? Here are a few thoughts.

PR as Call to Action - PRs are a shout. They are HERE IS SOME CODE and they create work for the maintainer. They are needy things and require review and merging, but even worse, sometimes manual merging. Plus for folks new to Git and Open Source, asking them to "rebase on top of latest" may be enough for them to just give up.
Fear of Closing - If you close a PR without merging it, it's a rejection. It's a statement that this work isn't going to be used, and there's always a chance that the person who did the work will feel pretty bad about it.
Abandoned - Sometimes the originator of the PR disappears. The PR is effectively abandoned. These should be closed after a time.
Opened so long they can't be merged - The problem with PRs that are open for long is that they become impossible to merge. The cost of understanding whether they are still relevant plus resolving the merge conflicts might be higher than the value of the PR itself.
Incorrectly created - A PR originator may intend to change a single word (a misspelling) but their PR changes CRs to LFs or Tabs to Spaces. It's a hassle.
Formatting - It's generally considered poor form to send a PR out of the blue where one just ran a linter or formatter. If the project wanted that done they'd ask for it.
Totally not aligned with Roadmap - If a PR shows up without context or communication, it may not be aligned with the direction of the project.
Surprise PR - Unfortunately some PRs show up out of the blue with major changes, file moves, or no context. If a PR wasn't asked for, wasn't requested, or didn't come out of an Issue, you'll likely have trouble pushing it through.

Thanks to Jon and Immo for their thoughts on this (likely incomplete) list. Jess Frazelle has a great post on "The Art of Closing" that I just found, and it includes a glorious gif from Glengarry Glen Ross where Always Be Closing comes from (warning, clip has dated and offensive language).

Jess suggests a few ways to Always Be Closing.

Two things that can help make your open source project successful AND stay tidy!

Including a CONTRIBUTING.md - GitHub has some good guidance https://help.github.com/articles/setting-up-your-project-for-healthy-contributions/ and most of the dotnet repos have some decent contribution guidelines. I LOVE Gina Häußge's Contributing.md on her open source project "OctoPrint."
Using Pull Request templates that give clear guidance on how to submit a successful pull request https://help.github.com/articles/creating-a-pull-request-template-for-your-repository
Using bots to test and build PRs, sign CLAs (Contributor License Agreements), and move the ball forward.

What do you think? Why do PRs stay open?

Sponsor: Get home early, eat supper on time and coach your kids in soccer. Moving workloads to Azure just got easy with Azure NetApp Files. Sign up to Preview Azure NetApp Files!


© 2018 Scott Hanselman. All rights reserved.
     

Interesting bugs - MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. Image is too small.

Aug 30, 2018

Description:

I got a very strange warning recently when building a .NET Core app with "dotnet build."

MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible.
Image is too small.

Interesting Bug used under CC https://flic.kr/p/4SpmL6

Eek! It's clear, in that something is "too small," but what? A file, I guess? Maybe it's the wrong size?

The error code is MSB3246 which is nice and googleable/searchable but it was confusing because I couldn't figure out what file, specifically. It just felt vague.

BUT!

I had recently been overclocking my machine (overly aggressively, gulp, about 40%) and had a very nasty hard reboot. As a result I had a few dozen files get orphaned - specifically the files were zero'ed out! Zero is small, right?

Turns out you can pass parameters over to MSBuild from "dotnet build" and see what MSBuild is doing internally. For example, you could

/fileLoggerParameters:verbosity=diagnostic

but that's long. So how about:

dotnet build /flp:v=diag

Cool. What deep logging do I see now?

Primary reference "deliberately.zero.bytes.dll". (TaskId:41)
13:36:52.397 1:7>C:\Program Files\dotnet\sdk\2.1.400\Microsoft.Common.CurrentVersion.targets(2110,5): warning MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. Image is too small. [S:\work\zero-byte-ref\zero-byte-ref.csproj]
Resolved file path is "S:\work\zero-byte-ref\deliberately.zero.bytes.dll". (TaskId:41)
Reference found at search path location "{RawFileName}". (TaskId:41)

Now with "verbose" turned on I can see that one of the references is zero'ed out/corrupted/bad. I reinstalled .NET Core in my case and doubled checked all the DLLs/Assemblies that I was bringing in - I also ran chkdsk /f - and I was back in business!

I hope this helps someone who might stumble on error MSB3246 and wonder what's up.

Even better, thanks to Rainer Sigwald who filed a bug against MSBuild to update the error message to be more clear. In the future I'll be able to debug this without changing verbosity!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.
© 2018 Scott Hanselman. All rights reserved.
     

Improvements on ASP.NET Core deployments on Zeit's now.sh and making small container images

Aug 29, 2018

Description:

Back in March of 2017 I blogged about Zeit and their cool deployment system "now." Zeit will take any folder and deploy it to the web easily. Better yet if you have a Dockerfile in that folder as Zeit will just use that for the deployment.

Zeit's free Open Source account has a limit of 100 megs for the resulting image, and with the right Dockerfile an ASP.NET Core app can come in at less than 77 megs. You just need to be smart about a few things. Additionally, it's running in a somewhat constrained environment, so ASP.NET's assumptions around FileWatchers can occasionally cause you to see errors like

at System.IO.FileSystemWatcher.StartRaisingEvents()
Unhandled Exception: System.IO.IOException:
The configured user limit (8192) on the number of inotify instances has been reached.
at System.IO.FileSystemWatcher.StartRaisingEventsIfNotDisposed(

While the DOTNET_USE_POLLING_FILE_WATCHER environment variable (more on that below) is set by default for the "FROM microsoft/dotnet:2.1-sdk" Dockerfile, it's not set at runtime. That's dependent on your environment.

Here's my Dockerfile for a simple project called SuperZeit. Note that the project is structured with a SLN file, which I recommend.

Let me call out a few things.

First, we're doing a Multi-stage build here. The SDK is large. You don't want to deploy the compiler to your runtime image!

Second, the first copy commands just copy the sln and the csproj. You don't need the source code to do a dotnet restore! (Did you know that?) Not deploying source means that your docker builds will be MUCH faster as Docker will cache the steps and only regenerate things that change. Docker will only run dotnet restore again if the solution or project files change. Not the source.

Third, we are using the aspnetcore-runtime image here. Not the dotnetcore one. That means this image includes the binaries for .NET Core and ASP.NET Core. We don't need or want to include them again. If you were doing a publish with the -r switch, you'd be doing a self-contained build/publish. You'd end up copying TWO .NET Core runtimes into a container! That'll cost you another 50-60 megs and it's just wasteful. If you want to do that, go explore the very good examples on the .NET Docker Repo on GitHub https://github.com/dotnet/dotnet-docker/tree/master/samples under "Optimizing Container Size":

.NET Core Alpine Docker Sample - This sample builds, tests, and runs an application using Alpine.
.NET Core self-contained Sample - This sample builds and runs an application as a self-contained application.

Finally, since some container systems like Zeit have modest settings for inotify instances (to avoid abuse, plus most folks don't use them as often as .NET Core does) you'll want to set ENV DOTNET_USE_POLLING_FILE_WATCHER=true, which I do in the runtime image.

So starting from this Dockerfile:

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
RUN dotnet publish -c Release -o out

FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Remember the layers of the Docker images, as if they were a call stack:

Your app's files
ASP.NET Core Runtime
.NET Core Runtime
.NET Core native dependencies (OS specific)
OS image (Alpine, Ubuntu, etc)

For my little app I end up with a 76.8 meg image. If I want, I can add the experimental .NET IL Trimmer. It won't make a difference with this app as it's already pretty simple but it could with a larger one.

BUT! What if we changed the layering to this?

Your app's files along with a self-contained copy of ASP.NET Core and .NET Core
.NET Core native dependencies (OS specific)
OS image (Alpine, Ubuntu, etc)

Then we could do a self-contained deployment and then trim the result! Richard Lander has a great dockerfile example.

See how he's doing the package addition with the dotnet CLI with "dotnet add package" and subsequent trim within the Dockerfile (as opposed to you adding it to your local development copy's csproj).

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Now at this point, I'd want to see how small the IL Linker made my ultimate project. The goal is to be less than 75 megs. However, I think I've hit this bug so I will have to head to bed and check on it in the morning.

The project is at https://github.com/shanselman/superzeit and you can just clone and "docker build" and see the bug.

However, if you check the comments in the Dockerfile and just use "FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime" it works fine. I just think I can get it even smaller than 75 megs.

Talk to you soon, Dear Reader! (I'll update this post when I find out about that bug...or perhaps my bug!)

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Decoding an SSH Key from PEM to BASE64 to HEX to ASN.1 to prime decimal numbers

Aug 24, 2018

Description:

I'm reading a new chapter of The Imposter's Handbook: Season 2 that Rob and I are working on. He's digging into the internals of what's exactly in your SSH Key.

Decoding a certificate

I generated a key with no password:

ssh-keygen -t rsa -C scott@myemail.com

Inside the generated file is this text, that we've all seen before but few have cracked open.

-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAtd8As85sOUjjkjV12ujMIZmhyegXkcmGaTWk319vQB3+cpIh
Wu0mBke8R28jRym9kLQj2RjaO1AdSxsLy4hR2HynY7l6BSbIUrAam/aC/eVzJmg7
qjVijPKRTj7bdG5dYNZYSEiL98t/+XVxoJcXXOEY83c5WcCnyoFv58MG4TGeHi/0
coXKpdGlAqtQUqbp2sG7WCrXIGJJdBvUDIQDQQ0Isn6MK4nKBA10ucJmV+ok7DEP
kyGk03KgAx+Vien9ELvo7P0AN75Nm1W9FiP6gfoNvUXDApKF7du1FTn4r3peLzzj
50y5GcifWYfoRYi7OPhxI4cFYOWleFm1pIS4PwIDAQABAoIBAQCBleuCMkqaZnz/
6GeZGtaX+kd0/ZINpnHG9RoMrosuPDDYoZZymxbE0sgsfdu9ENipCjGgtjyIloTI
xvSYiQEIJ4l9XOK8WO3TPPc4uWSMU7jAXPRmSrN1ikBOaCslwp12KkOs/UP9w1nj
/PKBYiabXyfQEdsjQEpN1/xMPoHgYa5bWHm5tw7aFn6bnUSm1ZPzMquvZEkdXoZx
c5h5P20BvcVz+OJkCLH3SRR6AF7TZYmBEsBB0XvVysOkrIvdudccVqUDrpjzUBc3
L8ktW3FzE+teP7vxi6x/nFuFh6kiCDyoLBhRlBJI/c/PzgTYwWhD/RRxkLuevzH7
TU8JFQ9BAoGBAOIrQKwiAHNw4wnmiinGTu8IW2k32LgI900oYu3ty8jLGL6q1IhE
qjVMjlbJhae58mAMx1Qr8IuHTPSmwedNjPCaVyvjs5QbrZyCVVjx2BAT+wd8pl10
NBXSFQTMbg6rVggKI3tHSE1NSdO8kLjITUiAAjxnnJwIEgPK+ljgmGETAoGBAM3c
ANd/1unn7oOtlfGAUHb642kNgXxH7U+gW3hytWMcYXTeqnZ56a3kNxTMjdVyThlO
qGXmBR845q5j3VlFJc4EubpkXEGDTTPBSmv21YyU0zf5xlSp6fYe+Ru5+hqlRO4n
rsluyMvztDXOiYO/VgVEUEnLGydBb1LwLB+MVR2lAoGAdH7s7/0PmGbUOzxJfF0O
OWdnllnSwnCz2UVtN7rd1c5vL37UvGAKACwvwRpKQuuvobPTVFLRszz88aOXiynR
5/jH3+6IiEh9c3lattbTgOyZx/B3zPlW/spYU0FtixbL2JZIUm6UGmUuGucs8FEU
Jbzx6eVAsMojZVq++tqtAosCgYB0KWHcOIoYQUTozuneda5yBQ6P+AwKCjhSB0W2
SNwryhcAMKl140NGWZHvTaH3QOHrC+SgY1Sekqgw3a9IsWkswKPhFsKsQSAuRTLu
i0Fja5NocaxFl/+qXz3oNGB56qpjzManabkqxSD6f8o/KpeqryqzCUYQN69O2LG9
N53L9QKBgQCZd0K6RFhhdJW+Eh7/aIk8m8Cho4Im5vFOFrn99e4HKYF5BJnoQp4p
1QTLMs2C3hQXdJ49LTLp0xr77zPxNWUpoN4XBwqDWL0t0MYkRZFoCAG7Jy2Pgegv
uOuIr6NHfdgGBgOTeucG+mPtADsLYurEQuUlfkl5hR7LgwF+3q8bHQ==
-----END RSA PRIVATE KEY-----

The private key is an ASN.1 (Abstract Syntax Notation One) encoded data structure. It's a funky format but it's basically a packed format with the ability for nested trees that can hold booleans, integers, etc.

However, ASN.1 is just the binary packed "payload." It's not the "container." For example, there are envelopes and there are letters inside them. The envelope is the PEM (Privacy Enhanced Mail) format. Such things start with ----- BEGIN SOMETHING ----- and end with ----- END SOMETHING ------. If you're familiar with BASE64, your spidey sense may tell you that this is a BASE64 encoded file. Not everything that's BASE64 turns into a friendly ASCII string. This turns into a bunch of bytes you can view in HEX.

We can first decode the PEM file into HEX. Yes, I know there's lots of ways to do this stuff at the command line, but I like showing and teaching using some of the many encoding/decoding websites and utilities there are out there. I also love using https://cryptii.com/ for these things as you can build a visual pipeline.

308204A40201000282010100B5DF00B3CE6C3948E3923575DAE8
CC2199A1C9E81791C9866935A4DF5F6F401DFE7292215AED2606
47BC476F234729BD90B423D918DA3B501D4B1B0BCB8851D87CA7
63B97A0526C852B01A9BF682FDE57326683BAA35628CF2914E3E
DB746E5D60D65848488BF7CB7FF97571A097175CE118F3773959
C0A7CA816FE7C306E1319E1E2FF47285CAA5D1A502AB5052A6E9
DAC1BB582AD7206249741BD40C8403410D08B27E8C2B89CA040D
74B9C26657EA24EC310F9321A4D372A0031F9589E9FD10BBE8EC
FD0037BE4D9B55BD1623FA81FA0DBD45C3029285EDDBB51539F8
AF7A5E2F3CE3E74CB919C89F5987E84588BB38F87123870560E5
snip
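If you'd rather do that PEM-to-HEX step in code than in a browser, a tiny C# sketch looks something like this (it just strips the -----BEGIN/END----- envelope, BASE64-decodes the payload, and prints it as hex):

using System;
using System.IO;
using System.Linq;

class PemToHex
{
    static void Main(string[] args)
    {
        // args[0] is the path to the PEM file, e.g. the "notreal" key above.
        var base64 = string.Concat(
            File.ReadAllLines(args[0])
                .Where(line => !line.StartsWith("-----")));

        byte[] der = Convert.FromBase64String(base64);
        Console.WriteLine(BitConverter.ToString(der).Replace("-", ""));
    }
}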

This ASN.1 JavaScript decoder can take the HEX and parse it for you. Or you can parse that ASN.1 packed format at the *nix command line and see that there are nine big integers inside (I trimmed them for this blog).

openssl asn1parse -in notreal
0:d=0 hl=4 l=1188 cons: SEQUENCE
4:d=1 hl=2 l= 1 prim: INTEGER :00
7:d=1 hl=4 l= 257 prim: INTEGER :B5DF00B3CE6C3948E3923575DAE8CC2199A1C9E81791C9866935A4DF5F6F401DFE7292215
268:d=1 hl=2 l= 3 prim: INTEGER :010001
273:d=1 hl=4 l= 257 prim: INTEGER :8195EB82324A9A667CFFE867991AD697FA4774FD920DA671C6F51A0CAE8B2E3C30D8A1967
534:d=1 hl=3 l= 129 prim: INTEGER :E22B40AC22007370E309E68A29C64EEF085B6937D8B808F74D2862EDEDCBC8CB18BEAAD48
666:d=1 hl=3 l= 129 prim: INTEGER :CDDC00D77FD6E9E7EE83AD95F1805076FAE3690D817C47ED4FA05B7872B5631C6174DEAA7
798:d=1 hl=3 l= 128 prim: INTEGER :747EECEFFD0F9866D43B3C497C5D0E3967679659D2C270B3D9456D37BADDD5CE6F2F7ED4B
929:d=1 hl=3 l= 128 prim: INTEGER :742961DC388A184144E8CEE9DE75AE72050E8FF80C0A0A38520745B648DC2BCA170030A97
1060:d=1 hl=3 l= 129 prim: INTEGER :997742BA4458617495BE121EFF68893C9BC0A1A38226E6F14E16B9FDF5EE072981790499E

Per the spec the format is this:

An RSA private key shall have ASN.1 type RSAPrivateKey:

RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER -- (inverse of q) mod p }

I found the description for how RSA works in this blog post very helpful as it uses small numbers as examples. The variable names here like p, q, and n are agreed upon and standard.

The fields of type RSAPrivateKey have the following meanings:
o version is the version number, for compatibility
with future revisions of this document. It shall
be 0 for this version of the document.
o modulus is the modulus n.
o publicExponent is the public exponent e.
o privateExponent is the private exponent d.
o prime1 is the prime factor p of n.
o prime2 is the prime factor q of n.
o exponent1 is d mod (p-1).
o exponent2 is d mod (q-1).
o coefficient is the Chinese Remainder Theorem
coefficient q⁻¹ mod p (the inverse of q, mod p).
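To make those definitions concrete, here's a small C# sketch using the classic textbook RSA values p = 61, q = 53, e = 17, d = 2753 (toy numbers for illustration, not the ones from the key above) that computes the same CRT fields the spec describes:

using System;
using System.Numerics;

class RsaFieldCheck
{
    static void Main()
    {
        // Classic textbook RSA values, small enough to follow by hand.
        BigInteger p = 61, q = 53, e = 17, d = 2753;

        BigInteger n = p * q;                      // modulus
        BigInteger exponent1 = d % (p - 1);        // d mod (p-1)
        BigInteger exponent2 = d % (q - 1);        // d mod (q-1)
        BigInteger coefficient = ModInverse(q, p); // (inverse of q) mod p

        Console.WriteLine($"n = {n}");                     // 3233
        Console.WriteLine($"exponent1 = {exponent1}");     // 53
        Console.WriteLine($"exponent2 = {exponent2}");     // 49
        Console.WriteLine($"coefficient = {coefficient}"); // 38

        // Sanity check: e*d should be 1 mod (p-1)(q-1) so encrypt/decrypt round-trips.
        Console.WriteLine($"e*d mod (p-1)(q-1) = {(e * d) % ((p - 1) * (q - 1))}"); // 1
    }

    // Modular inverse via the extended Euclidean algorithm.
    static BigInteger ModInverse(BigInteger a, BigInteger m)
    {
        BigInteger g = m, x = 0, x1 = 1, a1 = a % m;
        while (a1 != 0)
        {
            BigInteger quotient = g / a1;
            (g, a1) = (a1, g - quotient * a1);
            (x, x1) = (x1, x - quotient * x1);
        }
        return ((x % m) + m) % m;
    }
}

The same arithmetic works unchanged on the 1024-bit values from the asn1parse output above; BigInteger doesn't care how many digits are involved.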

Let's look at one of those big integers. It's super long in hexadecimal.

747EECEFFD0F9866D43B3C497C5D0E3967679659D2C270B3D945
6D37BADDD5CE6F2F7ED4BC600A002C2FC11A4A42EBAFA1B3D354
52D1B33CFCF1A3978B29D1E7F8C7DFEE8888487D73795AB6D6D3
80EC99C7F077CCF956FECA5853416D8B16CBD89648526E941A65
2E1AE72CF0511425BCF1E9E540B0CA23655ABEFADAAD028B

That hexadecimal number converted to decimal is this long ass number. It's 308 digits long!

22959099950256034890559187556292927784453557983859951626187028542267181746291385208056952622270636003785108992159340113537813968453561739504619062411001131648757071588488220532539782545200321908111599592636973146194058056564924259042296638315976224316360033845852318938823607436658351875086984433074463158236223344828240703648004620467488645622229309082546037826549150096614213390716798147672946512459617730148266423496997160777227482475009932013242738610000405747911162773880928277363924192388244705316312909258695267385559719781821111399096063487484121831441128099512811105145553708218511125708027791532622990325823

It's hard work to prove a number this size is prime, but there's a great Integer Factorization Calculator that actually uses WebAssembly and your own local CPU to check such things. Expect to wait a long time, sometimes until the heat death of the universe. ;)
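Factoring is the hard part; just checking whether a number is probably prime is fast. Here's a rough Miller-Rabin sketch in C# with System.Numerics.BigInteger (the number tested at the bottom is a small known Mersenne prime, not the one from the key):

using System;
using System.Numerics;
using System.Security.Cryptography;

class ProbablePrime
{
    // Miller-Rabin: returns false if n is definitely composite,
    // true if n is probably prime (error probability below 4^-rounds).
    static bool IsProbablePrime(BigInteger n, int rounds = 40)
    {
        if (n < 2) return false;
        if (n == 2 || n == 3) return true;
        if (n.IsEven) return false;

        // Write n - 1 as d * 2^r with d odd.
        BigInteger d = n - 1;
        int r = 0;
        while (d.IsEven) { d /= 2; r++; }

        var rng = RandomNumberGenerator.Create();
        byte[] bytes = new byte[n.ToByteArray().Length];

        for (int i = 0; i < rounds; i++)
        {
            // Pick a random witness a in [2, n - 2].
            BigInteger a;
            do
            {
                rng.GetBytes(bytes);
                a = new BigInteger(bytes);
                if (a < 0) a = -a;
                a = a % (n - 3) + 2;
            } while (a < 2 || a > n - 2);

            BigInteger x = BigInteger.ModPow(a, d, n);
            if (x == 1 || x == n - 1) continue;

            bool composite = true;
            for (int j = 0; j < r - 1; j++)
            {
                x = BigInteger.ModPow(x, 2, n);
                if (x == n - 1) { composite = false; break; }
            }
            if (composite) return false;
        }
        return true;
    }

    static void Main()
    {
        // 2^61 - 1, a known prime, keeps the example quick; paste any big number here.
        Console.WriteLine(IsProbablePrime(BigInteger.Parse("2305843009213693951"))); // True
    }
}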

Rob and I are finding it really cool to dig just below the surface of common things we look at all the time. I have often opened a key file in a text editor but never drawn a straight and complete line through decoding, unpacking, and decoding again, all the way to a mathematical formula. I feel I'm filling up some major gaps in my knowledge!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

How do you even know this crap?

Aug 22, 2018

Description:

Imposter's Handbook

This post won't be well organized, so lower your expectations first. When Rob Conery first wrote "The Imposter's Handbook" I was LOVING IT. It's a fantastic book written for imposters by an imposter. Remember, I'm the original phony.

Now he's working on The Imposter's Handbook: Season 2 and I'm helping. The book is currently in Presale and we're releasing PDFs every 2 to 3 weeks. Some of the ideas from the book will come from blog posts like or similar to this one. Since we are using Continuous Delivery and an Iterative Process to ship the book, some of the blog posts (like this one) won't be fully baked until they show up in the book (or not). See how I equivocated there? ;)

The next "Season" of The Imposter's Handbook is all about the flow of information. Information flowing through encoding, encryption, and transmission over a network. I'm also interested in the flow of information through one's brain as they move through the various phases of being a developer. Bear with me (and help me in the comments!).

I was recently on a call with two other developers, and it would be fair to say that we were of varied skill levels. We were doing some HTML and CSS work that I would say I'm competent at, but by no means an expert. Since our skill levels didn't fall on a single axis, we'd really need some Dungeons & Dragons cards to express our competencies.

D&D Cards from Battle Grip

I might be HTML 8, CSS 6, Computer Science 9, Obscure Trivia 11, for example.

We were asked to make a little banner with some text that could be later closed with some iconography that would represent close/dismiss/go away.

One engineer suggested "Here's some text + ICON.PNG". The next offered a more scalable option with "Here's some text + ICON.SVG".

Both are fine ideas that would work, while perhaps later having DPI or maintenance issues, but truly, perfectly cromulent ideas.

I have never been given this task, I am not a designer, and I am a mediocre front-end person. I asked what they wanted it to look like and they said "maybe a square with an X or a circle with an X or a circle with a line."

I offered, "You know, there MUST be a Unicode Glyph for that. I mean, there's one for poop." Apparently I say poop in business meetings more than any other middle manager at the company, but that's fodder for another blog post.

We searched and lo and behold we found ☒ and ⊝ and simply added them to the end of the string. They scale visibly, require no downloads or extra dependencies, and can be colored and styled nicely because they are text.
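For the curious, those glyphs are ordinary Unicode code points, so "adding them to the end of the string" really is just string work. A tiny sketch:

using System;

class GlyphDemo
{
    static void Main()
    {
        // U+2612 BALLOT BOX WITH X and U+229D CIRCLED DASH. They're just text,
        // so they scale, can be styled like any other text, and need no image download.
        string closable = "Here's some text " + "\u2612";
        string dismissable = "Here's some text " + "\u229D";

        Console.WriteLine(closable);
        Console.WriteLine(dismissable);
    }
}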

One of the engineers said "how do you even know this crap?" I smiled and shrugged and we moved on to the doing.

To be clear, this post isn't self-congratulatory. Perhaps you had the same idea. This interaction was all of 10 minutes long. But I'm interested in the HOW did I know this? Note that I didn't actually KNOW that these glyphs existed. I knew only that they SHOULD exist. They MUST.

How many times have you been coding and said "You know, there really must be a function/site/tool that does x/y/z?" All the time, right? You don't know the answers but you know someone must have AND must have solved it in a specific way such that you could find it. A new developer doesn't have this intuition - this sense of technical smell - yet.

How is technical gut and intuition and smell developed? Certainly by doing, by osmosis, by time, by sleeping, and waking, and doing it again.

I think it's exposure. It's exposure to a diverse set of technical problems that all build on a solid base of fundamentals.

Rob and I are going to try to expand on how this technical confidence gets developed in The Imposter's Handbook: Season 2 as topics like Logic, Binary and Logical Circuits, Compression and Encoding, Encryption and Cryptanalysis, and Networking and Protocols are discussed. But I want to also understand how/if/when these topics and examples excite the reader...and most importantly do they provide the reader with that missing Tetris Piece of Knowledge that moves you from a journeyperson developer to someone who can more confidently wear the label Computer Science 9, Obscure Trivia 11.


What do you think? Sound off in the comments and help me and Rob understand!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Upgrading existing .NET project files to the lean new CSPROJ format from .NET Core

Aug 17, 2018

Description:

Evocative random source code photo

If you've looked at csproj (C# (csharp) projects) in the past in a text editor you probably looked away quickly. They are effectively MSBuild files that orchestrate the build process. Phrased differently, a csproj file is an instance of an MSBuild file.

In Visual Studio 2017 and .NET Core 2 (and beyond) the csproj format is MUCH MUCH leaner. There's a lot of smart defaults, support for "globbing" like **/*.cs, etc., and you don't need to state a bunch of obvious stuff. Truly you can take earlier msbuild/csproj files and get them down to a dozen lines of XML, plus package references. PackageReferences (references to NuGet packages) should be moved out of packages.config and into the csproj. This lets you manage all project dependencies in one place and gives you an uncluttered view of top-level dependencies.

However, upgrading isn't as simple as "open the old project file and have VS automatically migrate you."

You have some options when migrating to .NET Core and the .NET Standard.

First, and above all, run the .NET Portability Analyzer and find out how much of your code is portable. Then you have two choices.

Create a new project file with something like "dotnet new classlib" and then manually get your projects building from the top (most common ancestor) project down
Try to use an open source 3rd party migration tool

Damian on my team recommends option one - a fresh project - as you'll learn more and avoid bringing cruft over. I agree, until there are dozens of projects; then I recommend trying a migration tool AND then comparing it to a fresh project file to avoid adding cruft. Every project/solution is different, so expect to spend some time on this.

The best way to learn this might be by watching it happen for real. Wade from Salesforce was tasked with upgrading his 4+ year old .NET Framework (Windows) based SDK to portable and open source .NET Core. He had some experience building for older versions of Mono and was thoughtful about not calling Windows-specific APIs so he knows the code is portable. However he needs to migrate the project files and structure AND get the Unit Tests running with "dotnet test" at the command line.

I figured I'd give him a head start by actually doing part of the work. It's useful to do this because, frankly, things go wrong and it's not pretty!

I started with Hans van Bakel's excellent CsProjToVS2017 global tool. It does an excellent job of getting your project 85% of the way there. To be clear, don't assume anything and not every warning will apply to you. You WILL need to go over every line of your project files, but it is an extraordinarily useful tool. If you have .NET Core 2.1, install it globally like this:

dotnet tool install Project2015To2017.Cli --global

Then it's called (unfortunately) with another command, "csproj-to-2017", and you can pass in a solution or an individual csproj.

After you've done the administrivia of the actual project conversion, you'll also want to make educated decisions about the 3rd party libraries you pull in. For example, if you want to make your project cross-platform BUT you depend on some library that is Windows only, why bother trying to port? Well, many of your favorite libraries DO have "netstandard" or ".NET Standard" versions. You'll see in the video below how I pull Wade's project's references forward with a new version of JSON.NET and a new NUnit. By the end we are building at the command line and running tests as well, with code coverage.

Please head over to my YouTube and check it out. Note this happened live and spontaneously plus I had a YouTube audience giving me helpful comments, so I'll address them occasionally.

LIVE: Upgrading an older .NET SDK to .NET Core and .NET Standard

If you find things like this useful, let me know in the comments and maybe I'll do more of them. Also, do you think things like this belong on the Visual Studio Twitch Channel? Go follow my favs on Twitch CSharpFritz and Noopkat for more live coding fun!

Friend of the Blog: Want to learn more about .NET for free? Join us at DotNetConf! It's a free virtual online community conference September 12-14, 2018. Head over to https://www.dotnetconf.net to learn more and for a Save The Date Calendar Link.


© 2018 Scott Hanselman. All rights reserved.
     

Azure Application Insights warned me of failed dependent requests on my site

Aug 15, 2018

Description:

I've been loving Application Insights ever since I hooked it up to my Podcast Site. Application Insights is stupid cheap and provides an unreal number of insights into what's going on in your site. I hooked it up and now I have a nice dashboard showing what's up. It's pretty healthy.

Lovely graphics showing HEALTHY websites

Here's an interesting view that shows the Availability Test that's checking my site as well as outbound calls (there aren't a lot, as I cache aggressively) to SimpleCast where I host my shows.

A chart showing 100% availability

Availability is important, of course, so I set up some tests from a number of locations. I don't want the site to be down in Brazil but up in France, for example.

However, I got an email a week ago that said my site had a sudden rise in failures. Here's the thing, though. When I set up a web test I naively thought I was setting up a "ping." You know, a knock on the door. I figured if the WHOLE SITE was down, they'd tell me.

Here's my availability for today, along with timing from a bunch of locations world wide.

A nice green availability chart

Check out this email. The site is fine; that is, the primary requests didn't fail. But a dependent request did fail! Application Insights noticed that an image referenced on the home page was suddenly a 404! Why suddenly? Because I put the wrong date and time for an episode and it auto-published before I had the guest's headshot!

I wouldn't have noticed this missing image until a user emailed me, so I was impressed that Application Insights gave me the heads up.

1 dependant request failed

Here is the chart for that afternoon when I published a bad show. Note that the site is technically up (it was) but a dependent request (a request after the main GET) failed.

Some red shows my site isn't very available

This is a client side failure, right? An image didn't load and it notified me. Cool. I can (and do) also instrument the back end code. Here you can see someone keeps sending me a PUT request, perhaps trying to poke at my site. By the way, random PUT person has been doing this for months.
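If you want a rough idea of what that kind of back-end instrumentation can look like, here's a hedged sketch using the Application Insights TelemetryClient (the class and event names are made up for illustration; this isn't my site's actual code):

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class EpisodePublisher
{
    private readonly TelemetryClient _telemetry;

    // In ASP.NET Core, TelemetryClient comes from dependency injection
    // once Application Insights is enabled for the app.
    public EpisodePublisher(TelemetryClient telemetry)
    {
        _telemetry = telemetry;
    }

    public void Publish(string slug)
    {
        try
        {
            // ... publish the episode ...
            _telemetry.TrackEvent("EpisodePublished",
                new Dictionary<string, string> { ["slug"] = slug });
        }
        catch (Exception ex)
        {
            // Tracked exceptions show up in the portal alongside failed requests.
            _telemetry.TrackException(ex);
            throw;
        }
    }
}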

I can also see slowest requests and dig as deep as I want. In fact I did a whole video on digging into Azure Application Insights that's up on YouTube.

A rogue PUT request

I've been using Application Insights for maybe a year or two now. Its depth continues to astound me. I KNOW I'm not using it to its fullest and I love that I'm still surprised by it.

Friend of the Blog: Want to learn more about .NET for free? Join us at DotNetConf! It's a free virtual online community conference September 12-14, 2018. Head over to https://www.dotnetconf.net to learn more and for a Save The Date Calendar Link.


© 2018 Scott Hanselman. All rights reserved.
     

Building the Ultimate Developer PC 3.0 - The Parts List for my new computer, IronHeart

Aug 10, 2018

Description:

Ironheart is my new i9 PC

It's been 7 years since the last time I built "The Ultimate Developer PC 2.0," and over 11 since the original Ultimate Developer PC that Jeff Atwood built for me. That last PC was $3000 and well, frankly, that's a heck of a lot of money. Now, I see a lot of you dropping $2k and $3k on MacBook Pros and Surfaces without apparently sweating it too much but I expect that much money to last a LONG TIME.

Do note that while my job does give me a laptop for work purposes every 3 years, my desktop is my own, paid for with my own money and not subsidized by my employer in any way. This PC is mine.

I wrote about money and The Programmer's Priorities in my post on Brain, Bytes, Back, and Buns. As developers we spend a lot of time looking at monitors, sitting in chairs, using computers, and sleeping. It stands to reason we should invest in good chairs, good monitors and PCs, and good beds. That also means good mice and keyboards, of course.

Was that US$3000 investment worth it? Absolutely. I worked on my PC2.0 nearly every day for 7 years. That's ~2500 days at about $1.25 a day if you consider upgradability.

Continuous PC Improvement via reasonably priced upgrades

How could I use the same PC for 7 years? Because it's modular.

Hard Drive - I upgraded 3 years back to a 512 gig Samsung 850 SSD and it's still a fantastic drive at only about $270 today. This kept my machine going and going FAST.

Video Card - I found a used NVidia 1070 on Craigslist for $250, but they are $380 new. A fantastic card that can do VR quite nicely, but for me, it ran three large monitors for years.

Monitors - I ran a 30" Dell as my main monitor that I bought used nearly 10 years ago. It does require a DisplayPort to Dual-Link DVI active adapter but it's still an amazing 2560x1600 monitor even today.

Memory - I started at 16 gigs and upgraded to 24 gigs when memory got cheaper.

All this adds up to me running the same first generation i7 processor up until 2018. And frankly, I probably could have gone another 3-5 years happily.

So why upgrade? I was gaming more and more as well as using my HTC Vive Pro and while the 1070 was great (although always room for improvement) I was pushing the original Processor pretty hard. On the development side, I have been running somewhat large distributed systems with Docker for Windows and Kubernetes, again, pushing memory and CPU pretty hard.

Ultimately however, price/performance for build-your-own PCs got to a reasonable place plus the ubiquity of 4k displays at reasonable costs made me think I could build a machine that would last me a minimum of 5 years, if not another 7.

Specifications

I bought my monitors from Dell directly and the PC parts from NewEgg.com. I named my machine IRONHEART after Marvel's Riri Williams.

Intel Core i9-7900X 10-Core 3.3 GHz Desktop Processor - I like this processor for a few reasons. Yes, I'm an Intel fan, but I like that it has 44 PCI Express lanes (that's a lot) which means, given I'm not running SLI with my video card, I'll have MORE than enough bandwidth for any peripherals I can throw at this machine. Additionally, its caching situation is nuts. There's 640k L1, 10 MEGS L2, and 13.8 MEGS L3. 640 ought to be enough for anyone, right? ;) It's also got 20 logical processors plus Intel Turbo Boost Max that will move specific cores to 4.5GHz as needed, up from the base 3.3GHz freq. It can also support up to 128 GB of RAM, and although I'll start with 32 gigs it's nice to have the room to grow.

288-pin DDR4 3200Mhz (PC4 25600) Memory 4 x 8G - These also have a fun lighting effect, and since my case is clear why not bling it out a little?

ASUS ROG STRIX LGA2066 X299 ATX Motherboard - Good solid board with built in BT and Wifi, an M.2 heatsink included, 3x PCIe 3.0 x16 SafeSlots (supports triple @ x16/x16/x8), 1x PCIe 3.0 x4, 2x PCIe 3.0 x1 and a max of 128 gigs of RAM. It also has 8x USB 3.1s and a USB C which is nice.

Corsair Hydro Series H100i V2 Extreme Performance Water/Liquid CPU Cooler - My last PC had a heat sink you could see from space. It was massive and unruly. This Cooler/Fan combo mounts cleanly and then sits at the top of the case. It opens up a TON of room and looks fantastic. I really like everything Corsair does.

WD Black 512GB Performance SSD - M.2 2280 PCIe NVMe Solid State Drive - It's amazing how cheap great SSDs are and I felt it was time to take it to the next level and try M.2 drives. M.2 is the "next generation form factor" for drives and replaces mSATA. M.2 SSDs are tiny and fast. This drive can do as much as 2 gigs a second, as much as 3x the speed of a SATA SSD. And it's cheap.

CORSAIR Crystal 570X RGB Tempered Glass, Premium ATX Mid Tower Case, White - I flipping love this case. It's white and clear, but mostly clear. The side is just a piece of tempered glass. There are three RGB LED fans in the front (along with the two I added on the top from the cooler, and one more in the back) and they all are software controllable. The case also has USB ports on top which is great since it's sitting under my clear glass desk. It is very well thought out and includes many cable routing channels so your cables can be effectively invisible. Highly recommended.

Clear white case
The backside of the clear white corsair case

Corsair 120mm RGB LED Fans - Speaking of fans, I got this three pack bringing the total 120mm fan count to 6 (7 if you count the GPU fan that's usually off).

TWO Anker 10 Port 60W USB hubs - I have a Logitech Brio 4k camera, a Peavey PV6 USB Mixer, and a bunch of other USB3 devices like external hard drives, Xbox Wireless Adapter and the like so I got two of these fantastic hubs and double-taped them to the desk above the case.

ASUS ROG GeForce GTX 1080 Ti 11 gig Video Card - This was arguably over the top but in this case I treated myself. First, I didn't want to ever (remember my 5 year goal) sweat video perf. I am/was very happy with my 1070 which is less than half the price, but as I've been getting more into VR, the NVidia 1070 can struggle a little. Additionally, I set the goal to drive 3 4k monitors at 60hz with zero issues, and I felt that the 1080 was a solid choice.

THREE Dell Ultra HD 4k Monitors P2715Q 27" - My colleague Damian LOVES these monitors. They are an excellent balance in size and cost and are well-calibrated from the factory. They are a full 4k and support DisplayPort and HDMI 2.0. Remember that my NVidia card has 2 DisplayPorts and 2 HDMI ports but I want to drive 3 monitors and 1 Vive Pro? I run the center monitor off DisplayPort and the left and right off HDMI 2.0. NOTE: The P2415Q and P2715Q both support HDMI 2.0 but it's not enabled from the factory. You'll need to enable HDMI 2.0 in the menus (read the support docs) and use a high-speed HDMI cable. Otherwise you'll get 4k at 30hz and that's really a horrible experience. You want 60hz for work at least. NOTE: When running the P2715Q off DisplayPort from an NVidia card you might initially get an output color format of YCbCr 4:2:2 which will make the anti-aliased text have a colored haze while the HDMI 2.0 displays look great with RGB color output. You'll need to go into the menus of the display itself and set the Input Color Format to RGB *and* also into the NVidia display settings after turning the monitor on and off to get it to stick. Otherwise you'll find the NVidia Control Panel will reset to the less desirable YCbCr422 format causing one of your monitors to look different than the others. Last note, sometimes Windows will say that a DisplayPort monitor is running at 59Hz. That's almost assuredly a lie. Believe your video card.
Three monitors all running 4k 60hz

What about perf?

Developers develop, right? A nice .NET benchmark is to compile Orchard Core both "cold" and "warm." I use .NET Core 2.1 downloaded from http://www.dot.net

Orchard is a fully-featured CMS with 143 projects loaded into Visual Studio. MSBUILD and .NET Core in 2.1 support both parallel and incremental builds.

A warm build of Orchard Core on IRONHEART takes just under 10 seconds. UPDATE: With overclock and tuning it builds in 7.39 seconds. My Surface Pro 3 builds it warm in 62 seconds.
A totally cold build (after a dotnet clean) on IRONHEART takes 33.3 seconds. UPDATE: With overclock and tuning it builds in 21.2 seconds. My Surface Pro 3 builds it cold in 2.4 minutes.

Additionally, the CPUs in this case weren't working at full speed very long. This may be as fast as these 143 projects can be built. Note also that Visual Studio/MSBuild will use as many processors as your system can handle. In this case it's using 20 procs.
MSBuild building Orchard Core across 20 logical processors

I can choose to constrain things if I think the parallelism is working against me, for example here I can try just with 4 processors. In my testing it doesn't appear that spreading the build across 20 processors is a problem. I tried just 10 (physical processors) and it builds in 12 seconds. With 20 processors (10 with hyperthreading, so 20 logical) it builds in 9.6 seconds so there's clearly a law of diminishing returns here.

dotnet build /maxcpucount:4

Building Orchard Core in 10 seconds

Regardless, my podcast site builds in less than 2 seconds on my new machine which makes me happy. I'm thrilled with this new machine and I hope it lasts me for many years.

PassMark

I like real world benchmarks, like building massive codebases and reading The Verge with an AdBlocker off, but I did run PassMark.

Passmark 98% percentile

UPDATE with Overclocking

I did some modest overclocking to about 4.5GHz as well as some Fan Control and temperature work, plus I'm trying it with Intel Turbo Boost Max turned off, and here's the updated Passmark taking the machine into the 99th percentile.

Overall 6075 -> 7285
CPU 19842 -> 23158
Disk 32985 -> 42426
2D Mark 724 -> 937 (not awesome)
3D Mark (originally failed with a window resize) -> 15019
Memory 2338 -> 2827 (also not awesome)

I still feel I may be doing something wrong here with memory. If I turn XMP on for memory those scores go up but then the CPU goes to heck.

Passmark of 7285, now 99th percentile

Now you!

Why don't you go get .NET Core 2.1 and clone Orchard Core from https://github.com/OrchardCMS/OrchardCore and run this in PowerShell

measure-command { dotnet build }

and let me know in the comments how fast your PC is with both cold and warm builds!

GOTCHAS: Some of you are telling me you're getting warm builds of 4 seconds. I don't believe you! ;) Be sure to run without "measure-command" and make sure you're not running a benchmark on a failed build! My overclocked build now does 7.39 seconds warm.

NOTE: I  have an affiliate relationship with NewEgg and Amazon so if you use my links to buy something I'll make a small percentage and you're supporting this blog! Thanks!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

11 essential characteristics for being a good technical advocate or interviewer

Aug 8, 2018

Description:

I was talking to my friend Rob Caron today. He produces Azure Friday with me - it's our weekly video podcast on Azure and the Cloud. We were talking about the magic for a successful episode, but then realized the ingredients that Rob came up with were generic enough that they were essential for anyone who is teaching or advocating for a technology.

Personally I don't believe in "evangelism" in a technical context and I dislike the term "Technology Evangelism." Not only does it evoke unnecessary zealotry but it also implies that your religion/technology is not only what's best for someone, but that it's the only solution. Java people shouldn't try to convert PHP people. That's all nonsense, of course. I like the word "advocate" because you're (hopefully) advocating for the right solution regardless of technology.

Here's the 11 herbs and spices that are needed for a great technical talk, a good episode of a podcast or show, or a decent career talking and teaching about tech.

Empathy for the guest – When talking to another person, never let someone flounder and fail – compensate when necessary so they are successful.
Empathy for the audience – Stay conscious that you're delivering a talk/episode/post that people want to watch/read.
Improvisation – Learn how to think on your feet and keep the conversation going (“Yes, and…”) Consider ComedySportz or other mind exercises.
Listening – Don't just wait for your turn to speak, and never interrupt just to say something. Be present and conscious and respond to what you’re hearing.
Speaking experience – Do the work. Hundreds of talks. Hundreds of interviews. Hundreds of shows. This ain’t your first rodeo. Being good means hard work and putting in the hours, over years. Whether it's 10 people in a lunch presentation or 2000 people in a keynote, you know what to articulate.
Technical experience – You have to know the technology. Strive to have context and personal experiences to reference. If you've never built/shipped/deployed something real (multiple times) you're just talking.
Be a customer – You use the product, every day, and more than just to demo stuff. Run real sites, ship real apps, multiple times. Maintain sites, have sites go down and wake up to fix them. Carry the proverbial pager.
Physical mannerisms – Avoid having odd personal tics and/or be conscious of your performance on video. I know what my tics are and I'm always trying to correct them. It's not self-critical, it's self-aware.
Personal brand – I'm not a fan of "personal branding" but here's how I think of it. Show up. (So important.) You’re a known quantity in the community. You're reliable and kind. This lends credibility to your projects. Lend your voice and amplify others. Be yourself consistently and advocate for others, always.
Confidence – Don't be timid in what you have to say BUT be perfectly fine with saying something that the guest later corrects. You're NOT the smartest person in the room. It's OK just to be a person in the room.
Production awareness – Know how to ensure everything is set to produce a good presentation/blog/talk/video/sample (font size, mic, physical blocking, etc.) Always do tech checks. Always.

These are just a few tips but they've always served me well. We've done 450 episodes of Azure Friday and I've done nearly 650 episodes of the Hanselminutes Tech Podcast. Please Subscribe!

Related Links
VIDEO: How to get started with technical public speaking!
11 Top Tips for a Successful Technical Presentation
Technical Presentations: Be Prepared for Absolute Chaos
Tips for Preparing for a Technical Presentation

* pic from stevebustin used under CC.

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Developing locally with ASP.NET Core under HTTPS, SSL, and Self-Signed Certs

Aug 2, 2018

Description:

Last week on Twitter @getify started an excellent thread pointing out that we should be using HTTPS even on our local machines. Why?

You want your local web development set up to reflect your production reality as much as possible. URL parsing, routing, redirects, avoiding mixed-content warnings, etc. It's very easy to accidentally find oneself on http:// when everything in 2018 should be under https://.

I'm using ASP.NET Core 2.1 which makes local SSL super easy. After installing from http://dot.net I'll "dotnet new razor" in an empty folder to make a quick web app.

Then, when I "dotnet run" I see two URLs serving pages:

C:\Users\scott\Desktop\localsslweb> dotnet run
Hosting environment: Development
Content root path: C:\Users\scott\Desktop\localsslweb
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

One is HTTP over port 5000 and the other is HTTPS over 5001. However, if I hit https://localhost:5001, I may see an error:

Your connection to this site is not secure

That's because this is an untrusted SSL cert that was generated locally:

Untrusted cert

There's a dotnet global tool built into .NET Core 2.1 to help with certs at dev time, called "dev-certs."

C:\Users\scott> dotnet dev-certs https --help

Usage: dotnet dev-certs https [options]

Options:
-ep|--export-path Full path to the exported certificate
-p|--password Password to use when exporting the certificate with the private key into a pfx file
-c|--check Check for the existence of the certificate but do not perform any action
--clean Cleans all HTTPS development certificates from the machine.
-t|--trust Trust the certificate on the current platform
-v|--verbose Display more debug information.
-q|--quiet Display warnings and errors only.
-h|--help Show help information

I just need to run "dotnet dev-certs https --trust" and I'll get a pop-up asking if I want to trust this localhost cert.

You want to trust this local cert?

On Windows it'll get added to the certificate store and on Mac it'll get added to the keychain. On Linux there isn't a standard way across distros to trust the certificate, so you'll need to perform the distro specific guidance for trusting the development certificate.

Close your browser and open up again at https://localhost:5001 and you'll see a trusted "Secure" badge in your browser.

Secure

Note also that by default HTTPS redirection is included in ASP.NET Core, and in Production it'll use HTTP Strict Transport Security (HSTS) as well, avoiding any initial insecure calls.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        app.UseHsts();
    }

    app.UseHttpsRedirection();
    app.UseStaticFiles();
    app.UseCookiePolicy();

    app.UseMvc();
}
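
Those defaults are configurable too. Here's a hedged sketch of what tuning HSTS and the HTTPS redirect can look like in ConfigureServices in ASP.NET Core 2.1 (the values are just examples, not recommendations; it slots into the same Startup class as the Configure method above):

public void ConfigureServices(IServiceCollection services)
{
    // HSTS header options, applied by app.UseHsts() outside of Development.
    services.AddHsts(options =>
    {
        options.MaxAge = TimeSpan.FromDays(60);
        options.IncludeSubDomains = true;
    });

    // Options for app.UseHttpsRedirection(): which status code and port to redirect to.
    // StatusCodes lives in Microsoft.AspNetCore.Http.
    services.AddHttpsRedirection(options =>
    {
        options.RedirectStatusCode = StatusCodes.Status307TemporaryRedirect;
        options.HttpsPort = 5001;
    });

    services.AddMvc();
}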

That's it. What's historically been a huge hassle for local development is essentially handled for you. Given that Chrome is marking http:// sites as "Not Secure" as of Chrome 68 you'll want to consider making ALL your sites Secure by Default. I wrote up how to get certs for free with Azure and Let's Encrypt.

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

One click deploy for MakeCode and the amazing AdaFruit Circuit Playground Express

Jul 31, 2018

Description:

Circuit Playground Express and Crickit

There are a ton of great open source hardware solutions out there. Often they're used to teach kids how to code, but they're also fun for adults to learn about hardware!

Ultimately that "LED Moment" can be the start of a love affair with open source hardware and software! Arduino is great, as is Arduino talking to the Cloud! If that's too much or too many moving parts, you always start small with other little robot kits for kids.

Recently my 10 year old and I have been playing with the Circuit Playground Express from Adafruit. We like it because it supports not just block-based programming and JavaScript via MakeCode (more on that in a moment) but you can also graduate to Circuit Python. Want to be even more advanced? You can also use the Arduino IDE to talk to Circuit Playground Express. It's quickly becoming our favorite board. Be sure to get the board, some batteries and a holder, as well as a few alligator clips.

The Circuit Playground Express board is round and has alligator-clip pads around it so you don't have to solder to get started. It has a bunch of sensors for light, temperature, motion, and sound, as well as an IR receiver and transmitter and LEDs for visual output. There are a million things you can do with it. This summer Microsoft Research is doing a project a week you can do with the kids in your life with Make Code!

I think the Circuit Playground Express is excellent by itself, but I like that I can stack it on top of the AdaFruit Crickit to make a VERY capable robotics platform. It's an ingenious design where three screws and metal standoffs connect the Crickit to the Circuit Playground and provide a bus for power and communication. The 10 year old wants to make a BattleBot now.

Sitting architecturally on top of all this great hardware is the open source Microsoft Make Code development environment. It's amazing and more people should be talking about it. MakeCode works with LEGO Mindstorms EV3, micro:bit, Circuit Playground Express, Minecraft, Cue robots, Chibichips, and more. The pair of devices is truly awesome.

Frankly I'm blown away at how easy it is and how easily my kids were productive. The hardest part of the whole thing was the last step where they need to copy the compiled code to the Circuit Playground Express. The editor is all online at GitHub https://github.com/Microsoft/pxt and you can run it locally if you like but there's no reason to unless you're developing new packages.

We went to https://makecode.adafruit.com/ for the Circuit Playground Express. We made a new project (and optionally added the Crickit board blocks as an extension) and then got to work. The 10 year old followed a tutorial and made a moisture sensor that uses an alligator clip and a nail to check if our plants need to be watered! (If they do, it beeps and turns red!)

You can see the code here as blocks...

The MakeCode IDE

or see the same code as JavaScript!

let Soil_reading = 0
let dry_value = 0
dry_value = 1500
light.setBrightness(45)
light.setAll(0x00ff00)
forever(function () {
    Soil_reading = input.pinA1.value()
    console.logValue("Soil reading", Soil_reading)
    if (Soil_reading < dry_value) {
        light.setAll(0xff0000)
        music.playTone(262, music.beat(BeatFraction.Half))
    } else {
        light.clear()
    }
    pause(__internal.__timePicker(2000))
})

When you've written your code, you just click DOWNLOAD and you'll get a "uf2" file.

Downloading compiled MakeCode UF2 files

Then the hardest part, you plug in the Circuit Playground Express via USB, it shows up as a Drive called "CPLAYBOOT," and you copy that file over. It's easy for techies, but a speed bump for kids.

Downloading compiled MakeCode UF2 files

It's really a genius process where they have removed nearly every obstacle in the hardware. After the file gets copied over (literally after the last byte is written) the device resets and starts running it.

The "Developer's Inner Loop" is as short as possible, so kudos to the team. Code, download, deploy, run/test, repeat.

This loop is fast and clever, but I wanted to speed it up a little so I wrote this little utility to automatically copy your MakeCode file to the Circuit Playground Express. Basically the idea is:

Associate my app with *.uf2 files
When launched, look for a local drive labeled CPLAYBOOT and copy the uf2 file over to it.

That's it. It speeds up the experience and saves me a number of clicks. Sure there's batch file/powershell/script ways to do it but this wasn't hard.

static void Main(string[] args)
{
    // The uf2 file arrives as the first argument via the file association.
    var sourceFile = args[0];

    // Find a mounted drive whose volume label is CPLAYBOOT.
    var drive = (from d in DriveInfo.GetDrives()
                 where d.VolumeLabel == "CPLAYBOOT"
                 select d.RootDirectory).FirstOrDefault();

    if (drive == null)
    {
        Console.WriteLine("Press RESET on your Circuit Playground Express and try again!");
        Environment.Exit(1);
    }

    Console.WriteLine($"Found Circuit Playground Express at {drive.FullName}");

    // Copy the compiled MakeCode uf2 onto the board; it flashes and reboots itself.
    File.Copy(sourceFile, Path.Combine(drive.FullName, Path.GetFileName(sourceFile)));
}

Then I double click on the uf2 file and get this dialog and SCROLL DOWN and click "Look for another app on this PC." (They are making this hard because they want you to use a Store App, which I haven't made yet)

Selecting a custom app for UF2 files

Now I can just click my uf2 files in Windows Explorer and they'll automatically get deployed to my Circuit Playground Express!

Found Circuit Playground Express at D:\

You can find source here https://github.com/shanselman/MakeCodeLaunchAndCopy if you're a developer, just get .NET Core 2.1 and run my .cmd file on Windows to build it yourself. Feel free to make it a Windows Store App if you're an overachiever. Pull Requests appreciated ;)

Otherwise, get the little release here https://github.com/shanselman/MakeCodeLaunchAndCopy/releases and unzip the contents into its own folder on Windows, then double-click a UF2 file, point Windows to the MakeCodeLaunchAndCopy.exe file, and you're all set!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

SQL Server on Linux or in Docker plus cross-platform SQL Operations Studio

Jul 27, 2018

Description:

I recently met some folks that didn't know that SQL Server 2017 also runs on Linux but they really needed to know. They had a single Windows desktop and a single Windows Server that they were keeping around to run SQL Server. They had long been a Linux shop and were now fully containerized...except for this machine under Anna's desk. (I assume The Cloud is next...pro tip: Don't have important servers under your desk). You can even get a license first and decide on the platform later.

You can run SQL Server on a few Linux flavors...

Install on Red Hat Enterprise Linux
Install on SUSE Linux Enterprise Server
Install on Ubuntu

or, even better, run it on Docker...

Run on Docker

Of course you'll want to do the appropriate volume mapping to keep your database on durable storage. I'm digging being able to spin up a full SQL Server inside a container on my Windows machine with no install.

I've got Docker for Windows on my laptop and I'm using Shayne Boyer's "Docker Why" repo to make the point. Look at his sample DockerCompose that includes both a web frontend and a backend using SQL Server on Linux.

version: '3.0'
services:

  mssql:
    image: microsoft/mssql-server-linux:latest
    container_name: db
    ports:
      - 1433:1433
    volumes:
      - /var/opt/mssql
      # we copy our scripts onto the container
      - ./sql:/usr/src/app
    # bash will be executed from that path, our scripts folder
    working_dir: /usr/src/app
    # run the entrypoint.sh that will import the data AND sqlserver
    command: sh -c ' chmod +x ./start.sh; ./start.sh & /opt/mssql/bin/sqlservr;'
    environment:
      ACCEPT_EULA: 'Y'
      SA_PASSWORD: P@$$w0rdP@$$w0rd

Note his starting command where he's doing an initial population of the database with sample data, then running sqlservr itself. The SQL Server on Linux Docker container includes the "sqlcmd" command line so you can set up the database, maintain it, etc with the same command line you've used on Windows. You can also configure SQL Server from Environment Variables so it makes it easy to use within Docker/Kubernetes. It'll take just a few minutes to get going.

Example:

/opt/mssql-tools/bin/sqlcmd -S localhost -d Names -U SA -P $SA_PASSWORD -I -Q "ALTER TABLE Names ADD ID UniqueIdentifier DEFAULT newid() NOT NULL;"
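
Once the container is up, .NET code connects to it like any other SQL Server. Here's a minimal sketch (assuming System.Data.SqlClient, plus the port and SA password from the compose file above; the Names database and table come from the sample's setup scripts):

using System;
using System.Data.SqlClient;

class QueryLinuxSql
{
    static void Main()
    {
        // Matches the docker-compose mapping above: SQL Server on localhost:1433
        // with the SA password from the environment section.
        var connectionString = "Server=localhost,1433;Database=Names;User Id=SA;Password=P@$$w0rdP@$$w0rd;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT TOP 5 * FROM Names", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader[0]);
                }
            }
        }
    }
}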

I cloned his repo (and I have .NET Core 2.1) and did a "docker-compose up" and boom, running a front end under Alpine and backend with SQL Server on Linux.

101→ C:\Users\scott> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e5b4dae93f6d namesweb "dotnet namesweb.dll" 38 minutes ago Up 38 minutes 0.0.0.0:57270->80/tcp, 0.0.0.0:44348->443/tcp src_namesweb_1
5ddffb76f9f9 microsoft/mssql-server-linux:latest "sh -c ' chmod +x ./…" 41 minutes ago Up 39 minutes 0.0.0.0:1433->1433/tcp mssql

Command lines are nice, but SQL Server is known for SQL Server Management Studio, a nice GUI for Windows. Did they release SQL Server on Linux and then expect everyone to use Windows to manage it? I say nay nay! Check out the cross-platform and open source SQL Operations Studio, "a data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux." You can download SQL Operations Studio free here.

SQL Ops Studio is really impressive. Here I am querying SQL Server on Linux running within my Docker container on my Windows laptop.

SQL Ops Studio - Cross platform SQL management

As I'm digging in and learning how far cross-platform SQL Server has come, I also checked out the mssql extension for Visual Studio Code that lets you develop and execute SQL against any SQL Server. The VS Code SQL Server Extension is also open source!

Go check out SQL Server in Docker at https://github.com/Microsoft/mssql-docker and try Shayne's sample at https://github.com/spboyer/docker-why

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

Example Code - Opinionated ContosoUniversity on ASP.NET Core 2.0's Razor Pages

Jul 25, 2018

Description:

The best way to learn about code isn't just writing more code - it's reading code! Not all of it will be great code and much of it won't be the way you would do it, but it's a great way to expand your horizons.

In fact, I'd argue that most people aren't reading enough code. Perhaps there aren't enough clean code bases to check out and learn from.

I was pleased to stumble on this code base from Jimmy Bogard called Contoso University at https://github.com/jbogard/ContosoUniversityDotNetCore-Pages.

There's a LOT of good stuff to read in this repo so I won't claim to have read it all or as deeply as I could. In fact, there's a good solid day of reading and absorbing here. However, here are some of the things I noticed and that I appreciate. Some of this is very "Jimmy" code, since it was written for and by Jimmy. This is a good thing and not a dig. We all collect patterns and make libraries and develop our own spins on architectural styles. I love that Jimmy collects a bunch of things he's created or contributed to over the years and put it into a nice clear sample for us to read. As Jimmy points out, there's a lot in https://github.com/jbogard/ContosoUniversityDotNetCore-Pages to explore:

CQRS pattern and MediatR
AutoMapper for automatically mapping "left hand/right hand" objects
Vertical slice architecture
Razor Pages
Fluent Validation and Shouldly
HtmlTags object model for generating HTML
Entity Framework Core

Clone and Build just works

A low bar, right? You'd be surprised how often I git clone someone's repository and they haven't tested it elsewhere. Bonus points for a build.ps1 that bootstraps whatever needs to be done. I had .NET Core 2.x on my system already and this build.ps1 got the packages I needed and built the code cleanly.

It's an opinionated project with some opinions. ;) And that's great, because it means I'll learn about techniques and tools that I may not have used before. If someone uses a tool that's not the "defaults" it may mean that the defaults are lacking!

Build.ps1 is using a build script style taken from PSake, a PowerShell build automation tool.
It's building to a folder called ./artifacts as a convention.
Inside build.ps1, it's using Roundhouse, a Database Migration Utility for .NET using SQL files and versioning based on source control http://projectroundhouse.org
It's set up for Continuous Integration in AppVeyor, a lovely CI/CD system I use myself.
It uses the Octo.exe tool from OctopusDeploy to package up the artifacts.

Organized and Easy to Read

I'm finding the code easy to read for the most part. I started at Startup.cs to just get a sense of what middleware is being brought in.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMiniProfiler().AddEntityFramework();

    services.AddDbContext<SchoolContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    services.AddAutoMapper(typeof(Startup));
    services.AddMediatR(typeof(Startup));
    services.AddHtmlTags(new TagConventions());

    services.AddMvc(opt =>
        {
            opt.Filters.Add(typeof(DbContextTransactionPageFilter));
            opt.Filters.Add(typeof(ValidatorPageFilter));
            opt.ModelBinderProviders.Insert(0, new EntityModelBinderProvider());
        })
        .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
        .AddFluentValidation(cfg => { cfg.RegisterValidatorsFromAssemblyContaining<Startup>(); });
}

Here I can see what libraries and helpers are being brought in, like AutoMapper, MediatR, and HtmlTags. Then I can go follow up and learn about each one.

MiniProfiler

I've always loved MiniProfiler. It's a hidden gem of .NET and it's been around being awesome forever. I blogged about it back in 2011! It sits in the corner of your web page and gives you REAL actionable details on how your site behaves and what the important perf timings are.

MiniProfiler is the profiler you didn't know you needed

It's even better with EF Core in that it'll show you the generated SQL as well! Again, all inline in your web site as you develop it.

inline SQL in MiniProfiler

Very nice.

Clean Unit Tests

Jimmy is using XUnit and has an IntegrationTestBase here with some stuff I don't understand, like SliceFixture. I'm marking this as something I need to read up on and research. I can't tell if this is the start of a new testing helper library, as it feels too generic and important to be in this sample.

He's using the CQRS "Command Query Responsibility Segregation" pattern. Here he starts with a Create command, sends it, then does a Query to confirm the results. It's very clean and he's got a very isolated test.

[Fact]
public async Task Should_get_edit_details()
{
    var cmd = new Create.Command
    {
        FirstMidName = "Joe",
        LastName = "Schmoe",
        EnrollmentDate = DateTime.Today
    };

    var studentId = await SendAsync(cmd);

    var query = new Edit.Query { Id = studentId };
    var result = await SendAsync(query);

    result.FirstMidName.ShouldBe(cmd.FirstMidName);
    result.LastName.ShouldBe(cmd.LastName);
    result.EnrollmentDate.ShouldBe(cmd.EnrollmentDate);
}

FluentValidator

https://fluentvalidation.net is a helper library for creating clear strongly-typed validation rules. Jimmy uses it throughout and it makes for very clean validation code.

public class Validator : AbstractValidator<Command>
{
    public Validator()
    {
        RuleFor(m => m.Name).NotNull().Length(3, 50);
        RuleFor(m => m.Budget).NotNull();
        RuleFor(m => m.StartDate).NotNull();
        RuleFor(m => m.Administrator).NotNull();
    }
}

Useful Extensions

Looking at a project's C# extension methods is a great way to determine what the author feels are gaps in the underlying included functionality. These are useful for returning JSON from Razor Pages!

public static class PageModelExtensions
{
    public static ActionResult RedirectToPageJson<TPage>(this TPage controller, string pageName)
        where TPage : PageModel
    {
        return controller.JsonNet(new
        {
            redirect = controller.Url.Page(pageName)
        });
    }

    public static ContentResult JsonNet(this PageModel controller, object model)
    {
        var serialized = JsonConvert.SerializeObject(model, new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        });

        return new ContentResult
        {
            Content = serialized,
            ContentType = "application/json"
        };
    }
}

PaginatedList

I've always wondered what to do with helper classes like PaginatedList. Too small for a package, too specific to be built-in? What do you think?

public class PaginatedList<T> : List<T>
{
    public int PageIndex { get; private set; }
    public int TotalPages { get; private set; }

    public PaginatedList(List<T> items, int count, int pageIndex, int pageSize)
    {
        PageIndex = pageIndex;
        TotalPages = (int)Math.Ceiling(count / (double)pageSize);
        this.AddRange(items);
    }

    public bool HasPreviousPage
    {
        get { return (PageIndex > 1); }
    }

    public bool HasNextPage
    {
        get { return (PageIndex < TotalPages); }
    }

    public static async Task<PaginatedList<T>> CreateAsync(IQueryable<T> source, int pageIndex, int pageSize)
    {
        var count = await source.CountAsync();
        var items = await source.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToListAsync();
        return new PaginatedList<T>(items, count, pageIndex, pageSize);
    }
}

I'm still reading all the source I can. Absorbing what resonates with me, considering what I don't know or understand and creating a queue of topics to read about. I'd encourage you to do the same! Thanks Jimmy for writing this large sample and for giving us some code to read and learn from!

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

AltCover and ReportGenerator give amazing code coverage on .NET Core

Jul 20, 2018

Description:

I'm continuing to explore testing and code coverage on open source .NET Core. Earlier this week I checked out coverlet. There is also the venerable OpenCover and there's some cool work being done to get OpenCover working with .NET Core, but it's Windows only.

Today, I'm exploring AltCover by Steve Gilham. There are coverage tools that use the .NET Profiling API at run-time; AltCover instead weaves IL into your assemblies for its coverage.

As the name suggests, it's an alternative coverage approach. Rather than working by hooking the .net profiling API at run-time, it works by weaving the same sort of extra IL into the assemblies of interest ahead of execution. This means that it should work pretty much everywhere, whatever your platform, so long as the executing process has write access to the results file. You can even mix-and-match between platforms used to instrument and those under test.

AltCover is a NuGet package but it's also available as .NET Core Global Tool which is awesome.

dotnet tool install --global altcover.global

This makes "altcover" a command that's available everywhere without adding it to my project.

That said, I'm going to follow the AltCover Quick Start and see how quickly I can get it set up!

I'll install it into my test project, hanselminutes.core.tests

dotnet add package AltCover

and then run

dotnet test /p:AltCover=true

90.1% Line Coverage, 71.4% Branch Coverage

Cool. My tests run as usual, but now I've got a coverage.xml in my test folder. I could also generate LCov or Cobertura reports if I'd like. At this point my coverage.xml is nearly a half-meg! That's a lot of good information, but how do I see the results in a human readable format?

This is the OpenCover XML format and I can run ReportGenerator on the coverage file and get a whole bunch of HTML files. Basically an entire coverage mini website!

I downloaded ReportGenerator and put it in its own folder (this would be ideal as a .NET Core global tool).

c:\ReportGenerator\ReportGenerator.exe -reports:coverage.xml -targetdir:./coverage

Make sure you use a decent targetDir otherwise you might end up with dozens of HTML files littered in your project folder. You might also consider .gitignoring the resulting folder and coverage file. Open up index.htm and check out all this great information!

Coverage Report says 90.1% Line Coverage

Note the Risk Hotspots at the top there! I've got a CustomPageHandler with a significant NPath Complexity and two Views with a significant Cyclomatic Complexity.

Also check out the excellent branch coverage as expressed here in the results of the coverage report. You can see that EnableAutoLinks was always true, so I only ever tested one branch. I might want to add a negative test here and explore if there are any side effects when EnableAutoLinks is false.

Branch Coverage

Be sure to explore AltCover and its Full Usage Guide. There's a number of ways to run it, from global tools, dotnet test, MSBuild Tasks, and PowerShell integration!

For use cases, see Use Cases. For modes of operation, see Modes of Operation. For driving AltCover from dotnet test, see dotnet test integration. For driving AltCover from MSBuild, see MSBuild Tasks. For driving AltCover and associated tools with Windows PowerShell or PowerShell Core, see PowerShell integration.

There's a lot of great work here and it took me literally 10 minutes to get a great coverage report with AltCover and .NET Core. Kudos to Steve on AltCover! Head over to https://github.com/SteveGilham/altcover and give it a STAR, file issues (be kind) or perhaps offer to help out! And most of all, share cool Open Source projects like this with your friends and colleagues.

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.
© 2018 Scott Hanselman. All rights reserved.
     

.NET Core Code Coverage as a Global Tool with coverlet

Jul 18, 2018

Description:

Last week I blogged about "dotnet outdated," an essential .NET Core "global tool" that helps you find out what NuGet package reference you need to update.

.NET Core Global Tools are really taking off right now. They are meant for devs - this isn't a replacement for chocolatey or apt-get - this is more like npm's global developer tools. They're putting together a better way to find and identify global tools, but for now Nate McMaster has a list of some great .NET Core Global Tools on his GitHub. Feel free to add to that list!

.NET tools can be installed like this:

dotnet tool install -g <package id>

So for example:

C:\Users\scott> dotnet tool install -g dotnetsay
You can invoke the tool using the following command: dotnetsay
Tool 'dotnetsay' (version '2.1.4') was successfully installed.
C:\Users\scott> dotnetsay

Welcome to using a .NET Core global tool!

You know, all the important tools. Seriously, some are super fun. ;)

Coverlet is a cross platform code coverage tool that's in active development. In fact, I automated my build with code coverage for my podcast site back in March. I combined VS Code, Coverlet, xUnit, plus these Visual Studio Code extensions

Coverage Gutters - Reads in the lcov.info file (name matters) and highlights lines with color
.NET Core Test Explorer - Discovers tests and gives you a nice explorer.

for a pretty nice experience! All free and open source.

I had to write a little PowerShell script because the "dotnet test" command for testing my podcast site with coverlet got somewhat unruly. Coverlet.msbuild was added as a package reference for my project.

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests

I heard last week that coverlet had initial support for being a .NET Core Global Tool, which I think would be convenient since I could use it anywhere on any project without added references.

dotnet tool install --global coverlet.console

At this point I can type "Coverlet" and it's available anywhere.

I'm told this is an initial build as a ".NET Global Tool" so there's always room for constructive feedback.

From what I can tell, I run it like this:

coverlet .\bin\Debug\netcoreapp2.1\hanselminutes.core.tests.dll --target "dotnet" --targetargs "test --no-build"

Note I have to pass in the already-built test assembly since coverlet instruments that binary and I need to say "--no-build" since we don't want to accidentally rebuild the assemblies and lose the instrumentation.

Coverlet can generate lots of coverage formats like opencover or lcov, and by default gives a nice ASCII table:

88.1% Line Coverage in Hanselminutes.core

I think my initial feedback (I'm not sure if this is possible) is smarter defaults. I'd like to "fall into the Pit of Success." That means even if I mess up and don't read the docs, I still end up successful.

For example, if I type "coverlet test" while the current directory is a test project, it'd be nice if that implied all this as these are reasonable defaults.

.\bin\Debug\whatever\whatever.dll --target "dotnet" --targetargs "test --no-build"

It's great that there is this much control, but I think "dotnet test" is a fair default assumption, so ideally I could go into any folder with a test project and type "coverlet test" and get that nice ASCII table. Later I'd be more sophisticated and read the excellent docs, as there are lots of great options like setting coverage thresholds and the like.

I think the future is bright with .NET Global Tools. This is just one example! What's your favorite?

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Lynx is dead - Long live Browsh for text-based internet browsing

Jul 13, 2018

Description:

The standard for browsing the web over a text-based terminal is Lynx, right? It's the legendary text web browser that you can read about at https://lynx.invisible-island.net/ or, even better, run right now with

docker run --rm -it nbrown/lynx lynx http://hanselman.com/

Awesome, right? But it's text. Lynx renders alt-text rather than images, and doesn't really take advantage of modern browser capabilities OR modern terminal capabilities.

Enter Browsh! https://www.brow.sh/

Browsh is a fully-modern text-based browser. It renders anything that a modern browser can; HTML5, CSS3, JS, video and even WebGL. Its main purpose is to be run on a remote server and accessed via SSH/Mosh

Imagine running your browser on a remote machine connected to full power while ssh'ing into your hosted browsh instance. I don't know about you, but my laptop is currently using 2 gigs of RAM for Chrome and it's basically just all fans. I might be able to get 12 hours of battery life if I hung out in tmux and used browsh! Not to mention the bandwidth savings. If I'm tethered or overseas on a 3G network, I can still get a great browsing experience and just barely sip data.

Browsing my blog with Browsh

You can even open new tabs! Check out the keybindings! You gotta try it. Works great on Windows 10 with the new console. Just run this one Docker command:

docker run -it --rm browsh/browsh

If you think this idea is silly, that's OK. I think it's brilliant and creative and exactly the kind of clever idea the internet needs. This solves an interesting problem in an interesting way...in fact it returns us to the "dumb terminal" days, doesn't it?

There was a time when my low-power machine waited for text from a refrigerator-sized machine. The fridge did the work and my terminal did the least.

Today my high-powered machine waits for text from another high-powered machine and then struggles to composite it all as 7 megs of JavaScript downloads from TheVerge.com. But I'm not bitter. ;)

Check out my podcast site on Browsh. Love it.

Tiny pixelated heads made with ASCII

If you agree that Browsh is amazing and special, consider donating! It's currently maintained by just one person and they just want $1000 a month on their Patreon to work on Browsh all the time! Go tell Tom on Twitter that you think it's special, then give him some coins. What an exciting and artful project! I hope it continues!

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

NuKeeper for automated NuGet Package Reference Updates on Build Servers

Jul 10, 2018

Description:

Last week I looked at "dotnet outdated," a super useful .NET Core Global Tool for keeping a project's NuGet packages up to date. Since then I've discovered there's a whole BUNCH of great projects solving different aspects of the "minor version problem." I love this answer to "Why?" from the NuKeeper (inspired by Greenkeeper) project, with emphasis mine. NuKeeper will check for updates AND try to update your references for you! Why not automate the tedious parts!

NuGet package updates are a form of change that should be deployed, and we likewise want to change the cycle from "NuGet package updates are infrequent and contain lots of package changes, therefore NuGet package updates are hard and dangerous..." to "NuGet package updates are frequent and contain small changes, therefore NuGet package updates are easy and routine...".

Certainly no one is advocating updating the major versions of your dependent NuGet packages, but small compatible bug fixes come often and are often missed. Including a tool to discover - and optionally apply - these changes in a CI/CD (Continuous Integration/Continuous Deployment) pipeline can be a great timesaver.

Why do we deploy code changes frequently but seldom update NuGet packages?

Good question!

NuKeeper

NuKeeper is also a .NET Global Tool that you can install with:

dotnet tool install --global NuKeeper

Here it is running on my regularly updated podcast website that is running ASP.NET Core 2.1:

NuKeeper says I have 3 packages to update

Looks like three of my packages are out of date. NuKeeper shows what version I have and what I could update to, as well as how long an update has been available.

You can also restrict your updates by policy, so "age=3w" for packages over 3 weeks old (so you don't get overly fresh updates) or "change=minor" or "change=patch" if you trust your upstream packages to not break things in patch releases, etc.

NuKeeper is picking up steam and while (as of the time of this writing) its command line parameter style is a little unconventional, Anthony Steele and the team are very open to feedback, with many improvements already in progress as this project matures!

The update functionality is somewhat experimental and currently does 1 update per local run, but I'm really enjoying the direction NuKeeper is going!

Automatic NuGet Updates via Pull Request

NuKeeper has a clever feature called Repository Mode: it can automatically issue a Pull Request against your repository with the changes needed to update your NuGet references. Check out this example Pull Request!

Anthony - the lead developer - points out that ideally you'd set up NuKeeper to send PRs for you. Automatic PRs are NuKeeper's primary goal and use case!

The NuKeeperBot has automatically issued a PR with a list of packages to update

Again, it's early days, but between NuKeeper and "dotnet outdated," I'm feeling more in control of my package references than ever before! What are YOU using?

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

Using Flurl to easily build URLs and make testable HttpClient calls in .NET

Jun 23, 2018

Description:

Flurl

I posted about using Refit along with ASP.NET Core 2.1's HttpClientFactory earlier this week. Several times when exploring this space (on Twitter, googling around, and in my own blog comments) I've come upon Flurl, as in "Fluent URL."

Not only is that a killer name for an open source project, Flurl is very active, very complete, and very interesting. By the way, take a look at the https://flurl.io/ site for a great example of a good home page for a well-run open source library. Clear, crisp, unambiguous, with links on how to Get It, Learn It, and Contribute. Not to mention extensive docs. Kudos!

Flurl is a modern, fluent, asynchronous, testable, portable, buzzword-laden URL builder and HTTP client library for .NET.

You had me at buzzword-laden! Flurl embraces the .NET Standard and works on .NET Framework, .NET Core, Xamarin, and UWP - so, everywhere.

To use just the URL builder, install Flurl. For the kitchen sink (recommended), you'll install Flurl.Http. In fact, Todd Menier was kind enough to share what a Flurl implementation of my SimpleCastClient would look like! Just to refresh you, my podcast site uses the SimpleCast podcast hosting API as its back-end.

My super basic typed implementation that "has a" HttpClient looks like this. To be clear this sample is WITHOUT FLURL.

public class SimpleCastClient
{
private HttpClient _client;
private ILogger<SimpleCastClient> _logger;
private readonly string _apiKey;

public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
{
_client = client;
_client.BaseAddress = new Uri($"https://api.simplecast.com"); //Could also be set in Startup.cs
_logger = logger;
_apiKey = config["SimpleCastAPIKey"];
}

public async Task<List<Show>> GetShows()
{
try
{
var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
_logger.LogWarning($"HttpClient: Loading {episodesUrl}");
var res = await _client.GetAsync(episodesUrl);
res.EnsureSuccessStatusCode();
return await res.Content.ReadAsAsync<List<Show>>();
}
catch (HttpRequestException ex)
{
_logger.LogError($"An error occurred connecting to SimpleCast API {ex.ToString()}");
throw;
}
}
}

Let's explore Todd's expression of the same client using the Flurl library!

Note we set up a client in Startup.cs, use the same configuration, and also put in some nice aspect-oriented events for logging the befores and afters. This is VERY nice and you'll note it pulls my cluttered logging code right out of the client!

// Do this in Startup. All calls to SimpleCast will use the same HttpClient instance.
FlurlHttp.ConfigureClient(Configuration["SimpleCastServiceUri"], cli => cli
.Configure(settings =>
{
// keeps logging & error handling out of SimpleCastClient
settings.BeforeCall = call => logger.LogWarning($"Calling {call.Request.RequestUri}");
settings.OnError = call => logger.LogError($"Call to SimpleCast failed: {call.Exception}");
})
// adds default headers to send with every call
.WithHeaders(new
{
Accept = "application/json",
User_Agent = "MyCustomUserAgent" // Flurl will convert that underscore to a hyphen
}));

Again, this setup code lives in Startup.cs and is a one-time thing. The headers and User-Agent are all dealt with once, right there, in a chained "fluent" manner.

Here's the new SimpleCastClient with Flurl.

using Flurl;
using Flurl.Http;

public class SimpleCastClient
{
// look ma, no client!
private readonly string _baseUrl;
private readonly string _apiKey;

public SimpleCastClient(IConfiguration config)
{
_baseUrl = config["SimpleCastServiceUri"];
_apiKey = config["SimpleCastAPIKey"];
}

public Task<List<Show>> GetShows()
{
return _baseUrl
.AppendPathSegment("v1/podcasts/shownum/episodes.json")
.SetQueryParam("api_key", _apiKey)
.GetJsonAsync<List<Show>>();
}
}

See in GetShows() how we're also using the URL builder fluent extensions in the Flurl library. See that _baseUrl is actually a string? We all know that we're supposed to use System.Uri, but it's such a hassle. Flurl adds extension methods to strings so you can seamlessly move from the string representations of URLs/URIs (that we all use anyway) to building up a query string and, in this case, a GET that returns JSON.

Very clean!
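If you want to play with just the URL-building half in isolation, here's a quick sketch (the values are made up for illustration; no HTTP call involved):

using System;
using Flurl;

// Build the URL fluently from a plain string.
var episodesUrl = "https://api.simplecast.com"
    .AppendPathSegment("v1/podcasts")
    .AppendPathSegment("123/episodes.json") // "123" is a placeholder show id
    .SetQueryParams(new { api_key = "FAKE-KEY", limit = 10 });

// episodesUrl is a Flurl.Url, and it converts back to a string whenever you need one.
// Should print: https://api.simplecast.com/v1/podcasts/123/episodes.json?api_key=FAKE-KEY&limit=10
Console.WriteLine(episodesUrl);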

Flurl also prides itself on making HttpClient testing easier. Here's a more sophisticated example from their site:

// Flurl will use 1 HttpClient instance per host
var person = await "https://api.com"
.AppendPathSegment("person")
.SetQueryParams(new { a = 1, b = 2 })
.WithOAuthBearerToken("my_oauth_token")
.PostJsonAsync(new
{
first_name = "Claire",
last_name = "Underwood"
})
.ReceiveJson<Person>();

This example is doing a post with an anonymous object that will automatically turn into JSON when it hits the wire. It also receives JSON as the response. Even the query params are created with a C# POCO (Plain Old CLR Object) and turned into name=value strings automatically.

Here's a test Flurl-style!

// fake & record all http calls in the test subject
using (var httpTest = new HttpTest()) {
// arrange
httpTest.RespondWith(200, "OK");
// act
await sut.CreatePersonAsync();
// assert
httpTest.ShouldHaveCalled("https://api.com/*")
.WithVerb(HttpMethod.Post)
.WithContentType("application/json");
}

Flurl.Http includes a set of features to easily fake and record HTTP activity. You can make a whole series of assertions about your APIs:

httpTest.ShouldHaveCalled("http://some-api.com/*")
.WithVerb(HttpMethod.Post)
.WithContentType("application/json")
.WithRequestBody("{\"a\":*,\"b\":*}") // supports wildcards
.Times(1)

All in all, it's an impressive set of tools that I hope you explore and consider for your toolbox! There's a ton of great open source like this with .NET Core and I'm thrilled to do a small part to spread the word. You should too!

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

Using ASP.NET Core 2.1's HttpClientFactory with Refit's REST library

Jun 20, 2018

Description:

Strong by Lucyb_22 used under Creative Commons from Flickr

When I moved my podcast site over to ASP.NET Core 2.1 I also started using HttpClientFactory and wrote up my experience. It's a nice clean way to centralize both settings and policy for your HttpClients, especially if you're using a lot of them to talk to a lot of small services.

Last year I explored Refit, an automatic type-safe REST library for .NET Standard. It makes it super easy to just declare the shape of a client and its associated REST API with a C# interface:

public interface IGitHubApi
{
[Get("/users/{user}")]
Task<User> GetUser(string user);
}

and then ask for an HttpClient that speaks that API's shape, then call it. Fabulous.

var gitHubApi = RestService.For<IGitHubApi>("https://api.github.com");

var octocat = await gitHubApi.GetUser("octocat");

But! What does Refit look like and how does it work in an HttpClientFactory-enabled world? Refit has recently been updated with first class support for ASP.NET Core 2.1's HttpClientFactory with the Refit.HttpClientFactory package.

Since you'll want to centralize all your HttpClient configuration in your ConfigureServices method in Startup, Refit adds a nice extension method hanging off of Services.

You add a RefitClient of a type, then add whatever other IHttpClientBuilder methods you want afterwards:

services.AddRefitClient<IWebApi>()
.ConfigureHttpClient(c => c.BaseAddress = new Uri("https://api.example.com"));
// Add additional IHttpClientBuilder chained methods as required here:
// .AddHttpMessageHandler<MyHandler>()
// .SetHandlerLifetime(TimeSpan.FromMinutes(2));

Of course, then you can just have your HttpClient automatically created and passed into the constructor. You'll see in this sample from their GitHub that you get an IWebApi (that is, whatever type you want, like my IGitHubApi) and can just go to town with a strongly typed interface over an HttpClient, with autocomplete.

public class HomeController : Controller
{
public HomeController(IWebApi webApi)
{
_webApi = webApi;
}

private readonly IWebApi _webApi;

public async Task<IActionResult> Index(CancellationToken cancellationToken)
{
var thing = await _webApi.GetSomethingWeNeed(cancellationToken);

return View(thing);
}
}

Refit is easy to use, and even better with ASP.NET Core 2.1. Go get Refit and try it today!

* Strong image by Lucyb_22 used under Creative Commons from Flickr

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

Penny Pinching in the Cloud: Deploying Containers cheaply to Azure

Jun 15, 2018

Description:

I saw a tweet from a person on Twitter who wanted to know the easiest and cheapest way to get a Web Application that's in a Docker Container up to Azure. There are a few ways and it depends on your use case.

Some apps aren't web apps at all, of course, and just start up in a stateless container, do some work, then exit. For a container like that, you'll want to use Azure Container Instances. I did a show and demo on this for Azure Friday.

Azure Container Instances

Using the latest Azure CLI  (command line interface - it works on any platform), I just do these commands to start up a container quickly. Billing is per-second. Shut it down and you stop paying. Get in, get out.

Tip: If you don't want to install anything, just go to https://shell.azure.com to get a bash shell and you can do these commands there, even on a Chromebook.

I'll make a "resource group" (just a label to hold stuff, so I can delete it en masse later). Then "az container create" with the image. Note that that's a public image from Docker Hub, but I can also use a private Container Registry or a private one in Azure. More on that in a second.

Anyway, make a group (or use an existing one), create a container, and then either hit the IP I get back or query for (or guess) the full name. It's usually dns-name-label.location.azurecontainer.io.

> az group create --name someContainers --location westus
Location Name
---------- --------------
westus someContainers
> az container create --resource-group someContainers --name fancypantscontainer --image microsoft/aci-helloworld --dns-name-label fancy-container-demo --ports 80
Name ResourceGroup ProvisioningState Image IP:ports CPU/Memory OsType Location
------------------- --------------- ------------------- ------------------------ ---------------- --------------- -------- ----------
fancypantscontainer someContainers Pending microsoft/aci-helloworld 40.112.167.31:80 1.0 core/1.5 gb Linux westus
> az container show --resource-group someContainers --name fancypantscontainer --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
FQDN ProvisioningState
--------------------------------------------- -------------------
fancy-container-demo.westus.azurecontainer.io Succeeded

Boom, container in the cloud, visible externally (if I want) and per-second billing. Since I made and named a resource group, I can delete everything in that group (and stop billing) easily:

> az group delete -g someContainers

This is cool because I can basically run Linux or Windows Containers in a "serverless" way. Meaning I don't have to think about VMs and I can get automatic, elastic scale if I like.

Azure Web Apps for Containers

ACI is great for lots of containers quickly, for bringing containers up and down, but I like my long-running web apps in Azure Web Apps for Containers. I run 19 Azure Web Apps today via things like Git/GitHub Deploy, publish from VS, or CI/CD from VSTS.

Azure Web Apps for Containers is the same idea, except I'm deploying containers directly. I can do a Single Container easily or use Docker Compose for multiple.

I wanted to show how easy it was to set this up so I did a video (cold, one take, no rehearsal, real accounts, real app) and put it on YouTube. It explains "How to Deploy Containers cheaply to Azure" in 21 minutes. It could have been shorter, but I also wanted to show how you can deploy from both Docker Hub (public) or from your own private Azure Container Registry.

I did all the work from the command line using Docker commands where I just pushed to my internal registry!

> docker login hanselregistry.azurecr.io
> docker build -t hanselregistry.azurecr.io/podcast .
> docker push hanselregistry.azurecr.io/podcast

Took minutes to get my podcast site running on Azure in Web Apps for Containers. And again - this is the penny pinching part - keep control of the App Service Plan (the VM underneath the App Service) and use the smallest one you can and pack the containers in tight.

Watch the video, and note when I get to the part where I create an "App Service Plan." Again, that's the VM under a Web App/App Service. I have 19 smallish websites inside a Small (sometimes a Medium, I can scale it whenever) App Service. You should be able to fit 3-4 decent sites in small ones depending on the memory and CPU characteristics of the site.

Click Pricing Plan and you'll get here:

Recommend Pricing tiers have many choices

Be sure to explore the Dev/Test tab on the left as well. When you're making a non-container-based App Service you'll see F1 and D1 for Free and Shared. Both are fine for small websites, demos, hosting your github projects, etc.

Free, Shared, or Basic Infrastructure

If you back up and select Docker as the "OS"...

Windows, Linux, or Docker

Again check out Dev/Test for less demanding workloads and note B1 - Basic.

B1 is $32.74

The first B1 is free for 30 days! Good to kick the tires. Then, as of the time of this post, it's US$32.74 (check pricing for different regions and currencies) but has nearly 2 gigs of RAM. I can run several containers in there.

Just watch your memory and CPU and pack them in. Again, more money means better perf, but the original ask here was how to save money.

Low CPU and 40% memory

To sum up, ACI is great for per-second billing (spinning up n containers programmatically and getting out fast) and App Service for Containers is an easy way to deploy your Dockerized apps. Hope this helps.

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

EarTrumpet 2.0 makes Windows 10's audio subsystem even better...and it's free!

Jun 13, 2018

Description:

EarTrumpet

Last week I blogged about some new audio features in Windows 10 that make switching your inputs and outputs easier, but even better, allow you to set up specific devices for specific programs. That means I can have one mic and headphones for Audition, while another for browsing, and yet another set for Skype.

However, while doing my research and talking about this on Twitter, lots of people started recommending I check out "EarTrumpet" - it's an applet that lets you control the volume of classic and modern Windows Apps in one nice UI! Switching, volume, and more. Consider EarTrumpet a prosumer replacement for the little Volume icon down by the clock in Windows 10. You'll hide the default one and drag EarTrumpet over in its place and use it instead!

EarTrumpet

EarTrumpet is available for free in the Windows Store and works on all versions of Windows 10, even S! I have no affiliation with the team that built it and it's a free app, so you have literally nothing to lose by trying it out!

EarTrumpet is also open source and on GitHub. The team that built it is:

Rafael Rivera - a software forward/reverse engineer
David Golden - lead engineer on MetroTwit, the greatest WPF Twitter Client the world has never known.
Dave Amenta - ex-Microsoft, worked on shell and Start menu for Windows 8 and 10

It was originally built as a replacement for the Volume Control in Windows back in 2015, but EarTrumpet 2.0's recent release makes it easy to use the new audio capabilities in the Windows 10's April 2018 Update.

Looks Good

It's easy to make a crappy Windows App. Heck, it's easy to make a crappy app. But EarTrumpet is NOT just an "applet" or an app. It's a perfect example of how a Windows 10 app - not made by Microsoft - can work and look seamlessly with the operating system. You'll think it's native - and it adds functionality that probably should be built in to Windows!

It's got light/dark theme support (no one bothers to even test this, but EarTrumpet does) and a nice acrylic blur. It looks like it's built-in/in-box. There's a sample app up on Rafael's GitHub so you can make your apps look this sharp, and here's the actual BlurWindowExtensions that EarTrumpet uses.

Works Good

Quickly switch output

EarTrumpet 1.x works on Windows "RS3 and below" so that's 10.0.16299 and down. But 2.0 works on the latest Windows and is also written entirely in C#. Any remaining C++ code has been removed with no missing functionality.

EarTrumpet may SEEM like a simple app but there's a lot going on to be this polished AND work with any combination of audio hardware. As a podcaster and remote worker I have a LOT of audio devices, but I now have one-click control over it all.

Given how fast Windows 10 has been moving with Insiders Builds and all, it seems like there are a bunch of APIs with new functionality that lack docs. The EarTrumpet team has reverse engineered the parts they needed.

Modern Resource Technology (MRT) Resource Manager - Used to pull resources out of AppX apps on the local machine without manually trying to crack open PRI files, etc. Background: https://docs.microsoft.com/en-us/windows/uwp/app-resources/using-mrt-for-converted-desktop-apps-and-games and Code here. They're investigating moving to the now-available Windows.ApplicationModel.Resources.* API.

Internal Audio Interface: IAudioPolicyConfigFactory - Gets them access to new APIs (GetPersistedDefaultAudioEndpoint / SetPersistedDefaultAudioEndpoint) in RS4 that let them 'redirect' apps to different playback devices. Same API used in modern sound settings. Code here, with no public API yet?

Internal Audio Interface: IPolicyConfig - Gets them access to the SetDefaultEndpoint API, which lets them change the default playback device. Code here, and no public API yet?

Acrylic Blur (win32) - They want to match the shell UI/UX, so EarTrumpet needs to look and feel the same. Blog post: https://withinrafael.com/2018/02/01/adding-acrylic-blur-to-your-windows-10-apps-redstone-4-desktop-apps/ Code here, and they're evaluating potential replacements with some composition APIs now in RS4.

From a development/devops perspective, I'm told EarTrumpet's team is able to push a beta flight through the Windows 10 Store in just over 30 minutes. No waiting for days to get beta test data. They use Bugsnag (under its generous OSS license) to catch crashes and telemetry. They're getting >3000 new users a month as the word gets out, with nearly 100k users so far! Hopefully +1 as you give EarTrumpet a try yourself!

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

ASP.NET Core Architect David Fowler's hidden gems in 2.1

Jun 11, 2018

Description:

ASP.NET Architect David Fowler

Open source ASP.NET Core 2.1 is out, and Architect David Fowler took to Twitter to share some hidden gems that not everyone knows about. Sure, it's faster, builds faster, and runs faster, but there are a number of details and fun advanced techniques that are worth a closer look.

.NET Generic Host

ASP.NET Core introduced a new hosting model. .NET apps configure and launch a host.

The host is responsible for app startup and lifetime management. The goal of the Generic Host is to decouple the HTTP pipeline from the Web Host API to enable a wider array of host scenarios. Messaging, background tasks, and other non-HTTP workloads based on the Generic Host benefit from cross-cutting capabilities, such as configuration, dependency injection (DI), and logging.

This means that there's not just a WebHost anymore, there's a Generic Host for non-web-hosting scenarios. You get the same feeling as with ASP.NET Core and all the cool features like DI, logging, and config. The sample code for a Generic Host is up on GitHub.
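To give you a feel for it, here's a minimal sketch of standing up a Generic Host in a console app (MyTimedService is a hypothetical IHostedService - more on those next):

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main(string[] args)
    {
        // A non-web host: you still get DI, configuration, and logging - just no HTTP pipeline.
        var host = new HostBuilder()
            .ConfigureServices(services =>
            {
                services.AddHostedService<MyTimedService>(); // hypothetical background service
            })
            .Build();

        await host.RunAsync();
    }
}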

IHostedService

IHostedService is a way to run long-running background operations in both the Generic Host and in your web-hosted applications. ASP.NET Core 2.1 added support for a BackgroundService base class that makes it trivial to write a long-running async loop. The sample code for a Hosted Service is also up on GitHub.

Check out a simple Timed Background Task:

public Task StartAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Timed Background Service is starting.");

_timer = new Timer(DoWork, null, TimeSpan.Zero,
TimeSpan.FromSeconds(5));

return Task.CompletedTask;
}

Fun!
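If you'd rather not manage the timer and StartAsync/StopAsync plumbing yourself, here's a minimal sketch of the BackgroundService approach (the loop body is just an illustration):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class TimedWorker : BackgroundService
{
    private readonly ILogger<TimedWorker> _logger;

    public TimedWorker(ILogger<TimedWorker> logger) => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // BackgroundService handles start/stop; we just write the long-running loop.
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Doing background work at {time}", DateTimeOffset.Now);
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

Register it with services.AddHostedService<TimedWorker>() and the host takes care of starting and stopping it.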

Windows Services on .NET Core

You can now host ASP.NET Core inside a Windows Service! Lots of people have been asking for this. Again, no need for IIS, and you can host whatever makes you happy. Check out Microsoft.AspNetCore.Hosting.WindowsServices on NuGet and extensive docs on how to host your own ASP.NET Core app without IIS on Windows as a Windows Service.

public static void Main(string[] args)
{
var pathToExe = Process.GetCurrentProcess().MainModule.FileName;
var pathToContentRoot = Path.GetDirectoryName(pathToExe);

var host = WebHost.CreateDefaultBuilder(args)
.UseContentRoot(pathToContentRoot)
.UseStartup<Startup>()
.Build();

host.RunAsService();
}

IHostingStartup - Configure IWebHostBuilder with an Assembly Attribute

Simple and clean with source on GitHub as always.

[assembly: HostingStartup(typeof(SampleStartups.StartupInjection))]

Shared Source Packages

This is an interesting one you should definitely take a moment and pay attention to. It's possible to build packages that are used as helpers to share source code. We internally call these "shared source packages." These are used all over ASP.NET Core for things that should be shared BUT shouldn't be public APIs. These get used but won't end up as actual dependencies of your resulting package.

They are consumed like this in a CSPROJ. Notice the PrivateAssets attribute.

<PackageReference Include="Microsoft.Extensions.ClosedGenericMatcher.Sources" PrivateAssets="All" Version="" />
<PackageReference Include="Microsoft.Extensions.ObjectMethodExecutor.Sources" PrivateAssets="All" Version="" /> ObjectMethodExecutor

If you ever need to invoke a method on a type via reflection and that method could be async, we have a helper that we use everywhere in the ASP.NET Core code base that is highly optimized and flexible called the ObjectMethodExecutor.

The team uses this code in MVC to invoke your controller methods. They use this code in SignalR to invoke your hub methods. It handles async and sync methods. It also handles custom awaitables and F# async workflows.

SuppressStatusMessages

A small and commonly requested one. If you hate the output that dotnet run gives when you host a web application (printing out the binding information) you can use the new SuppressStatusMessages extension method.

WebHost.CreateDefaultBuilder(args)
.SuppressStatusMessages(true)
.UseStartup<Startup>();

AddOptions

They made it easier in 2.1 to configure options that require services. Previously you would have had to create a type that derived from IConfigureOptions<TOptions>; now you can do it all in ConfigureServices via AddOptions<TOptions>:

public void ConfigureServices(IServiceCollection services)
{
services.AddOptions<MyOptions>()
.Configure<IHostingEnvironment>((o,env) =>
{
o.Path = env.WebRootPath;
});
}

HttpContext via AddHttpContextAccessor

You likely shouldn't be digging around for HttpContext, but lots of folks ask how to get to it and some feel it should be automatic. It's not registered by default since having it has a performance cost. However, in ASP.NET Core 2.1 a PR was put in for an extension method that makes it easy IF you want it.

services.AddHttpContextAccessor();
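Once it's registered, any service can take a dependency on IHttpContextAccessor. A minimal sketch of consuming it (MyService is hypothetical):

using Microsoft.AspNetCore.Http;

public class MyService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public MyService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetCurrentPath()
    {
        // HttpContext is null outside of an active request, so guard before using it.
        var path = _httpContextAccessor.HttpContext?.Request.Path.Value;
        return path ?? "(no active request)";
    }
}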

So ASP.NET Core 2.1 is out and ready to go

New features in this release include:

SignalR – Add real-time web capabilities to your ASP.NET Core apps.
Razor class libraries – Use Razor to build views and pages into reusable class libraries.
Identity UI library & scaffolding – Add identity to any app and customize it to meet your needs.
HTTPS – Enabled by default and easy to configure in production.
Template additions to help meet some GDPR requirements – Give users control over their personal data and handle cookie consent.
MVC functional test infrastructure – Write functional tests for your app in-memory.
[ApiController], ActionResult<T> – Build clean and descriptive web APIs (see the sketch after this list).
IHttpClientFactory – HttpClient as a service that you can centrally manage and configure.
Kestrel on Sockets – Managed sockets replace libuv as Kestrel's default transport.
Generic host builder – Generic host infrastructure decoupled from HTTP with support for DI, configuration, and logging.
Updated SPA templates – Angular, React, and React + Redux templates have been updated to use the standard project structures and build systems for each framework (Angular CLI and create-react-app).
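For the [ApiController] and ActionResult<T> pair specifically, here's a minimal sketch of a 2.1-style API controller (Show and the lookup are placeholders, not code from any real project):

using Microsoft.AspNetCore.Mvc;

public class Show
{
    public int Id { get; set; }
    public string Title { get; set; }
}

[ApiController]
[Route("api/[controller]")]
public class ShowsController : ControllerBase
{
    [HttpGet("{id}")]
    public ActionResult<Show> GetById(int id)
    {
        var show = FindShow(id); // placeholder lookup against your data store
        if (show == null)
            return NotFound();   // ActionResult<T> can carry a status code result...

        return show;             // ...or the T itself, via an implicit conversion
    }

    private static Show FindShow(int id) => null; // placeholder so the sketch stands alone
}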

Check out What's New in ASP.NET Core 2.1 in the ASP.NET Core docs to learn more about these features. For a complete list of all the changes in this release, see the release notes.

Go give it a try. Follow this QuickStart and you can have a basic Web App up in 10 minutes.

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Carriage Returns and Line Feeds will ultimately bite you - Some Git Tips

Jun 5, 2018

Description:

Typewriter by Matunos used under Creative Commons

What's a Carriage and why is it Returning? Carriage Return Line Feed WHAT DOES IT ALL MEAN!?!

The paper on a typewriter rides horizontally on a carriage. The Carriage Return or CR was a non-printable control character that would reset the typewriter to the beginning of the line of text.

However, a Carriage Return moves the carriage back but doesn't advance the paper by one line. The carriage moves on the X axis...

And Line Feed or LF is the non-printable control character that turns the Platen (the main rubber cylinder) by one line.

Hence, Carriage Return and Line Feed. Two actions, and for years, two control characters.

Every operating system seems to encode an EOL (end of line) differently. Operating systems in the late 70s all used CR LF together literally because they were interfacing with typewriters/printers on the daily.

Windows uses CRLF because DOS used CRLF because CP/M used CRLF because history.

Mac OS used CR for years until OS X switched to LF.

Unix used just a single LF over CRLF and has since the beginning, likely because systems like Multics started using just LF around 1965. Saving a single byte EVERY LINE was a huge deal for both storage and transmission.

Fast-forward to 2018 and it's maybe time for Windows to also switch to just using LF as the EOL character for Text Files.

Why? For starters, Microsoft finally updated Notepad to handle text files that use LF.

BUT

Would such a change be possible? Likely not; it would break the world. Here's NewLine on .NET Core.

public static String NewLine {
    get {
        Contract.Ensures(Contract.Result<String>() != null);
#if !PLATFORM_UNIX
        return "\r\n";
#else
        return "\n";
#endif // !PLATFORM_UNIX
    }
}
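
If you're writing code that has to cope with text coming from both worlds, the usual trick is to normalize before comparing or hashing. A trivial sketch:

// Normalize CRLF (and any stray CR) to LF before comparing text across operating systems.
static string NormalizeLineEndings(string text) =>
    text.Replace("\r\n", "\n").Replace("\r", "\n");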

Regardless, if you regularly use Windows and WSL (Linux on Windows) and Linux together, you'll want to be conscious and aware of CRLF and LF.

I ran into an interesting situation recently. First, let's review what Git does.

You can configure .gitattributes to tell Git how to treat files, either individually or by extension.

When

git config --global core.autocrlf true

is set, git will automatically convert files quietly so that they are checked out in an OS-specific way. If you're on Linux and check out, you'll get LF; if you're on Windows, you'll get CRLF.

Viola on Twitter offers an important clarification:

"gitattributes controls line ending behaviour for a repo, git config (especially with --global) is a per user setting."

99% of the time this system and the options available work great.

Except when you are sharing file systems between Linux and Windows. I use Windows 10 and Ubuntu (via WSL) and keep stuff in /mnt/c/github.

However, if I pull from Windows 10 I get CRLF, and if I pull from Linux I get LF, so my shell scripts MAY OR MAY NOT WORK while in Ubuntu.

I've chosen to create a .gitattributes file that sets both shell scripts and PowerShell scripts to LF. This way those scripts can be used and shared and RUN between systems.

*.sh eol=lf
*.ps1 eol=lf

You've got lots of choices. Again 99% of the time autocrlf is the right thing.

From the GitHub docs:

You'll notice that files are matched--*.c, *.sln, *.png--, separated by a space, then given a setting--text, text eol=crlf, binary. We'll go over some possible settings below.

text=auto - Git will handle the files in whatever way it thinks is best. This is a good default option.

text eol=crlf - Git will always convert line endings to CRLF on checkout. You should use this for files that must keep CRLF endings, even on OSX or Linux.

text eol=lf - Git will always convert line endings to LF on checkout. You should use this for files that must keep LF endings, even on Windows.

binary - Git will understand that the files specified are not text, and it should not try to change them. The binary setting is also an alias for -text -diff.

Again, the defaults are probably correct. BUT - if you're doing weird stuff, sharing files or file systems across operating systems then you should be aware.

Edward Thomson, a co-maintainer of libgit2, has this to say and points us to his blog post on Line Endings.

I would say this more strongly. Because `core.autocrlf` is configured in a scope that's per-user, but affects the way the whole repository works, `.gitattributes` should _always_ be used.

If you're having trouble, it's probably line endings. Edward's recommendation is that ALL projects check in a .gitattributes.

The key to dealing with line endings is to make sure your configuration is committed to the repository, using .gitattributes. For most people, this is as simple as creating a file named .gitattributes at the root of your repository that contains one line:
* text=auto

Hope this helps!

I hope Microsoft bought Github so they can fix this CRLF vs LF issue.

— Scott Hanselman (@shanselman) June 4, 2018

* Typewriter by Matunos used under Creative Commons

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Automatically change your Audio Input, Output and Volume per application in Windows 10

May 31, 2018

Description:

I recently blogged about an amazing little utility called AudioSwitcher that makes it two-clicks easy to switch your audio inputs and outputs. I need to switch audio devices a lot as I'm either watching video, doing a podcast, doing a conference call, playing a game, etc. That's at least three different "scenarios" for my audio setup. I've got 5 inputs and 5 outputs and I've seen PC audiophiles with even more.

I set up this AudioSwitcher and figured, cool, solved that silly problem. Then I got "EarTrumpet" - it's an applet that lets you control the volume of classic and modern Windows Apps in one nice UI! Switching, volume, and more. Very "prosumer," which is me, so I dig it.

A little birdie said that I should also look closer at Windows 10 itself. What? I know this OS like the back of my hand! Nonsense!

Hit the Start Menu and search for either "Sound Mixer" or "App Volume"

Sound mixer options

There's a page that does double duty called App Volume and Device Preferences.

You can also get to it from the regular Settings | Audio page:

change the device or app volume

See where it says "Change the device or app volume?" Ok, now DRINK THIS IN.

You can set the volume in active apps on an app-by-app basis. Cool. NOT IMPRESSED ARE YOU? Of course not, because while that's a lovely feature it's not the hidden power I'm talking about.

You can set the Preferred Input and Output device on an App by App Basis.

App Volume and Device Preferences

You can set the Preferred Input and Output device on an App by App Basis.

Read that again. I'll wait.

Rather than me constantly using the Audio Switcher (lovely as it is) I'll just set my ins and outs for each app.

The only catch is that this list only shows the apps that are currently using the mic/speaker, so if you want to get a nice setup, you'll need to run each app first in order to change its settings.

Here I've got the system sounds running through Default (usually the main speakers; the default mic is a webcam). The Speech Runtime (I use WIN+H to use Windows 10's built-in Dragon-Naturally-Style-But-Not free dictation in any app) uses the Webcam mic explicitly as it has the best recognition in my experience. Skype for Business is now using the phone. You can certainly set these things in the apps themselves, but in my experience Skype for Business doesn't care about your feelings or your audio settings. ;) I record my podcast with Zencastr so I've set up Chrome with my preferred/optimal settings.

I can still use the AudioSwitcher but now my defaults are contextual so I'm switching a LOT LESS.

Be sure to pick up "EarTrumpet" for even more advanced options!

What do you think? Did YOU know this existed?

Sponsor: Learn how .NET in 2018 addresses the challenges developers are working on with future-focused technology. Get the new whitepaper on "The State of .NET in 2018" by the Progress Telerik team!


© 2018 Scott Hanselman. All rights reserved.
     

Securing an Azure App Service Website under SSL in minutes with Let's Encrypt

May 29, 2018

Description:

A screenshot that says "Your connection to this site is not secure."

Let’s Encrypt is a free, automated, and open Certificate Authority. That means you can get free SSL certs and change your sites from http:// to https://. What's the catch? The SSL Certificates only last 90 days - not a year or years. They do this to encourage automation. If you set this up, you'll want to have some scripts or background process to automatically renew and install the certificates.

I run nearly two dozen websites (some small, some significant) on Azure. Given that Chrome 68+ is going to call out non HTTPS sites explicitly as "Not secure" in July, now's as good a time as any for us to get our sites - large and small - encrypted. I have some small static "brochure-ware" sites like http://babysmash.com that just aren't worth the money for a cert. Now it's free, so let's do it.

In some theoretical future, I hope that Azure and clouds like it will have a single "encrypt it" button and handle the details for us, but as of the date of this blog post, there's some manual initial setup and some great work from the community.

There's a protocol for getting certificates called "ACME" - Automated Certificate Management Environment - and the EFF has a tool called Certbot that helps you request and deploy certs. There is a whole ecosystem around it, and if you are running Windows/IIS you can use a great, simple ACME client called "Win-ACME." There are also UIs like Certify SSL Manager, PowerShell commands for ACME, and systems like "GetSSL - Azure Automation," so you can feel free to roll your own script in an afternoon. Again, if you have a Windows VM and IIS, it's pretty straightforward and getting easier every day.

I'm currently using Simon J.K. Pedersen's lovely (and volunteer and unsupported, so be nice) Azure Let's Encrypt Web App Site Extension. I followed the instructions here but hit a few snags and a few things that aren't totally obvious. Many kudos and thanks to Simon for his hard work on this, as he's saving us all many hours of trouble!

Securing an Azure Web App with Let's Encrypt and the (unofficial) SJKP Let's Encrypt Site Extension

I'll go and secure BabySmash.com right now. Make a text file and keep track of these few things.

What's our checklist?

Azure Storage connection string - You'll need one for the extension to store state.
App Service Hosting Plan and App Service Resource Group Name - Ideally your "plan" (the VM your site runs on) and your site are in the same Resource Group (a resource group is just a name for a pile of stuff).
Service Principal Client/Application ID - This is like an account that the Site Extension will run as to do its job. It's an "on behalf of" delegate that will automate the changes to your site. You might see "client id" or "application id," they are the same thing.
Service Principal Client Secret - You'll make a new Key in your Service Principal. I called mine "login" but it doesn't matter, then some value like a generated password (also doesn't matter) and then hit Save. You'll then get a long hashed value - THAT is your Client Secret. Save it, you'll never see it again and you can't get it back.

Cool. Let's do it. Again, following along with the wiki, I'll make an App under Active Directory | App Registrations in the Azure Portal at https://portal.azure.com

Add a new App Registration in the Azure Portal

Make a new app...

Creating a new App Registration

Now grab the Application ID, aka Client ID and save that in your scratch space/notepad/sticky note/smart brain/don't lose it.

Copying the App Registration ClientID

Now click Settings, Keys, make a new one called "login" with a password and click Save. COPY THAT VALUE. You'll never see it again.

Adding a Key to the App Registration

Now, go to the Resource Group for your App Service and App Service Plan. Ideally it'll be the same one, but if it's not, go to each one and keep track of the names. I went there with the search box at the top of the Azure Portal.

Going to the Resource Group

The Portal changes sometimes, and this next step didn't line up to the Wiki instructions exactly. Click add, then make your new App Registration from above a "Contributor" to your Resource Group.

Adding the App as a Contributor to the Resource Group

Now head over to your actual App Service, and click Extensions.

App Service Extensions

I picked Azure Let's Encrypt to have this run as a Web Job in the background.

Adding the Let's Encrypt App Service Extension

Now, while you're at your Web App/Site, go to Settings and make sure you've set the following two connection strings: AzureWebJobsDashboard and AzureWebJobsStorage. Don't forget this step or it'll all work once but fail in 3 months during the renewal.

Both of these should be set to your Azure Storage Account connection string, e.g. DefaultEndpointsProtocol=https;AccountName=[myaccount];AccountKey=[mykey];

Remember the Web Job needs this storage so it can renew the certs every 3 months. Add them as "Custom."

Connection Strings in App Settings

Next, the instructions say to "configure the Site Extension." That can be confusing until you realize a Site Extension is really a "Side car web site." It is its own little website, running off to the side of your site. It will be at http://YOURSITENAME.scm.azurewebsites.net/LetsEncrypt so mine is at http://babysmash.scm.azurewebsites.net/LetsEncrypt.

You'll then want to fill this form out. Your "Tenant ID" is your Azure Active Directory URL. You'll find your SubscriptionId in the "Overview" tab.

Configuring the Let's Encrypt Extension

Click Next, Next, and then hold down CTRL (as this is a multi-selection dialog) and pick the sites you want a certificate for. Note that www.yourdomain and yourdomain (the naked domain) are two different certs.

Requesting two SSL Certs

You'll want to confirm you see "Certificate successfully installed."

Certificate successfully installed.

Then head back over to the Azure Portal and turn on HTTPS Only if you'd like Azure itself (versus your code) to ensure and redirect all non-secure links to https://. Also confirm your SSL Bindings are correct. They should have been set up automatically.

HTTPS Only in the Azure Portal

Now I'll go hit https://babysmash.com and...

A screenshot that says "Your connection to this site is not secure."

It's not secure! Ah, now my site is in "mixed mode." That means that some of the resources like gifs or css were fetched with non-ssl (HTTP://) links. I'll update my site and all its external resources like YouTube embeds and fonts with https:// so that everything is secure. Since I'm using Git Deploy with Azure Web Apps (Azure App Service) I'll just make the changes and push the site again. You can also look at the elements as they load in F12 Browser Tools if you are having trouble finding out which image, css, or js file came in over http://

I'll redeploy and after a few tries, boom.

https://www.babysmash.com

And there's the cert. Note its expiration date. If the Site Extension does its job it will renew the cert before it expires!

A Let's Encrypt SSL Cert

Once I knew what I was doing, it took about 10 minutes per site. Thanks Simon for your work, and while there are multiple ways to do this, I found Simon's App Service Extension the easiest. I hope the Azure team comes up with a "One Click Solution" to this.

What do you think?

Sponsor: Learn how .NET in 2018 addresses the challenges developers are working on with future-focused technology. Get the new whitepaper on "The State of .NET in 2018" by the Progress Telerik team!


© 2018 Scott Hanselman. All rights reserved.
     

The year of Linux on the (Windows) Desktop - WSL Tips and Tricks

May 25, 2018

Description:

I've been doing a ton of work in bash/zsh/fish lately - Linuxing. In case you didn't know, Windows 10 can run Linux now. Sure, you can run Linux in a VM, but it's heavy and you need a decent machine. You can run a shell under Docker, but you'll need Hyper-V and Windows 10 Pro. You can even go to https://shell.azure.com and get a terminal anywhere - I do this on my Chromebook.

But mostly I run Linux natively on Windows 10. You can too. Just open PowerShell once as Administrator, run this command, and reboot:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then head over to the Windows Store and download Ubuntu, or Debian, or Kali, or whatever.

Ubuntu, openSUSE, SLES, Kali Linux, Debian GNU/Linux

What's happening is you're running user-mode Linux without the Linux kernel. The syscalls (system calls) that these un-modified Linuxes use are brokered over to Windows. Fork a Linux process? It's a pico-process in Windows and shows up in the task manager.

Want to edit files in both Windows and in Linux? Keep your files/code in /mnt/c/ and you can edit them with either OS. Don't use Windows to "reach into the Linux file system." There be dragons.


Once you've got a Linux installed (or many, as I do) you can manage them and use them in a number of ways.

Think this is stupid or foolish? Stop reading and keep running Linux and I wish you all the best. More power to you.

Want to know more? Want to look at new and creative ways you can get the BEST of the Windows UI and Linux command line tools? Read on, friends.

wslconfig

WSL means "Windows Subsystem for Linux." Starting with the Windows 10 (version 1709 - that's 2017-09, the Fall Creators Update. Run "Winver" to see what you're running), you've got a command called "wslconfig." Try it out. It lists distros you have and controls which one starts when you type "bash."

Check out below that my default for "bash"  is Ubuntu 16.04, but I can run 18.04 manually if I like. See how I move from cmd into bash and exit out, then go back in, seamlessly. Again, no VM.

C:\>wslconfig /l /all
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Ubuntu-18.04
openSUSE-42
Debian
kali-rolling

C:\>wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Ubuntu-18.04
openSUSE-42
Debian
kali-rolling

C:\>bash
128 → $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
128 → $ exit
logout

C:\>ubuntu1804
scott@SONOFHEXPOWER:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
scott@SONOFHEXPOWER:~$

You can also pipe things into Linux commands by piping to wsl or bash like this:

C:\Users\scott\Desktop>dir | wsl grep "poop"
05/18/2018 04:23 PM <DIR> poop

If you're in Windows, running cmd.exe or powershell.exe, it's best to move into Linux by running wsl or bash as it keeps the current directory.

C:\Users\scott\Desktop>bash
129 → $ pwd
/mnt/c/Users/scott/Desktop
129 → $ exit
logout

Cool! Wondering what that number is before my Prompt? That's my blood sugar. But that's another blog post.

wsl.conf

There's a file at /etc/wsl.conf that lets you control things like whether your Linux of choice automounts your Windows drives. You can also control more advanced things like whether Windows autogenerates a hosts file or processes /etc/fstab. It's up to you!

Distros

There are a half-dozen distros available (and more coming, I'm told), but YOU can also make/package your own Linux distribution for WSL with the packager/distro-launcher that's open sourced on GitHub.

Docker and WSL

Everyone wants to know if you can run Docker "natively" on WSL. No, that's a little too "Inception," and as mentioned, the Linux Kernel is not present. The unmodified elf binaries work fine but Windows does the work. BUT!

You can run Docker for Windows and click "Expose daemon on localhost:2375" and since Windows and WSL/Linux share the same port space, you CAN run the Docker client very happily on WSL.

After you've got Docker for Windows running in the background, install it in Ubuntu following the regular instructions. Then update your .bashrc to force your local docker client to talk to Docker for Windows:

echo "export DOCKER_HOST=tcp://0.0.0.0:2375" >> ~/.bashrc && source ~/.bashrc

There are lots of much longer and more detailed "Docker on WSL" tutorials, so if you'd like more technical detail, I'd encourage you to check them out! If you use a lot of Volume Mounts, I found Nick's write-up very useful.

Now when I run "docker images" or whatever from WSL I'm talking to Docker for Windows. Works great, exactly as you'd expect and you're sharing images and containers in both worlds.

128 → $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
podcast test 1bd29d0223da 9 days ago 2.07GB
podcast latest e9dd366f0375 9 days ago 271MB
microsoft/dotnet-samples aspnetapp 80a65a6b6f95 11 days ago 258MB
microsoft/dotnet-samples dotnetapp b3d7f438bad3 2 weeks ago 180MB
microsoft/dotnet 2.1-sdk 1f63052e44c2 2 weeks ago 1.72GB
microsoft/dotnet 2.1-aspnetcore-runtime 083ca6a642ea 2 weeks ago 255MB
microsoft/dotnet 2.1-runtime 6d25f57ea9d6 2 weeks ago 180MB
microsoft/powershell latest 708fb186511e 2 weeks ago 318MB
microsoft/azure-cli latest 92bbcaff2f87 3 weeks ago 423MB
debian jessie 4eb8376dc2a3 4 weeks ago 127MB
microsoft/dotnet-samples latest 4070d1d1e7bb 5 weeks ago 219MB
docker4w/nsenter-dockerd latest cae870735e91 7 months ago 187kB
glennc/fancypants latest e1c29c74e891 20 months ago 291MB

Fabulous.

Coding and Editing Files

I need to hit this point again. Do not change Linux files using Windows apps and tools. However, you CAN share files and edit them with both Windows and Linux by keeping code on the Windows filesystem.

For example, my work is at c:\github so it's also at /mnt/c/github. I use Visual Studio Code and edit my code there (or vim, from within WSL) and I run the code from Linux. I can even run bash/wsl from within Visual Studio Code using its integrated terminal. Just hit "Ctrl+P" in Visual Studio Code and type "Select Default Shell."

Select Default Shell in Visual Studio Code

On Windows 10 Insiders edition, Windows now has a UI called "Sets" that will give you Tabbed Command Prompts. Here I am installing Ruby on Rails in Ubuntu next to two other prompts - Cmd and PowerShell. This is all default Windows - no add-ons or extra programs for this experience.

Tabbed Command Prompts

I'm using Rails as an example here because Ruby/Rails support on Windows with native extensions has historically been a challenge. There's been a group of people heroically (and thanklessly) trying to get Ruby on Rails working well on Windows, but today there is no need. It runs great on Linux under Windows.

I can also run Windows apps or tools from Linux as long as I use their full name with extension (like code.exe) or set an alias.

Here I've made an alias "code" that runs code in the current directory, then I've got VS Code running editing my new Rails app.

Editing a Rails app on Linux on Windows 10 with VS Code

I can even mix and match Windows and Linux when piping. This will likely make Windows people happy and deeply offend Linux people. Or, if you're non-denominational like me, you'll dig it!

$ ipconfig.exe | grep IPv4 | cut -d: -f2
172.21.240.1
10.159.21.24

Again a reminder: Modifying files not located under /mnt/<x> with a Windows application in WSL is not supported. But edit stuff under /mnt/<x> with whatever you like and you're cool.

Sharing Sharing Sharing

If you have Windows 10 Build 17064 or newer (run "ver" from Windows or "cmd.exe /c ver" from Linux), you can even share an environment variable!

131 → $ cmd.exe /c ver

Microsoft Windows [Version 10.0.17672.1000]

There's a special environment variable called "WSLENV" that is a colon-delimited list of environment variables that should be included when launching WSL processes from Win32 or Win32 processes from WSL. Basically you give it a list of variables you want to roam/share. This will make it easy for things like cross-platform dual builds. You can even add a /p flag and it'll automatically translate paths between c:\windows style and /mnt/c/windows style.

Check out the example at the WSL Blog about how to share a GOPATH and use VSCode in Windows and run Go in both places.

You can also use a special built-in command line called "wslpath" to translate path names between Windows and WSL. This is useful if you're sharing bash scripts, doing cross-platform scripts (I have PowerShell Core scripts that run in both places) or just need to programmatically switch path types.

131 → $ wslpath "d:\github\hanselminutes-core"
/mnt/d/github/hanselminutes-core
131 → $ wslpath "c:\Users\scott\Desktop"
/mnt/c/Users/scott/Desktop

There is no man page for wslpath yet, but copied from this GitHub issue, here's the gist:

wslpath usage:
-a force result to absolute path format
-u translate from a Windows path to a WSL path (default)
-w translate from a WSL path to a Windows path
-m translate from a WSL path to a Windows path, with ‘/’ instead of ‘\\’

One final note: once you've installed a Linux distro from the Windows Store, it's on you to keep it up to date. The Windows Store won't run "apt upgrade" or ever touch your Linuxes once they've been installed. Additionally, you can have Ubuntu 16.04 and 18.04 installed side by side and it won't hurt anything.

Related Links
Setting up a Shiny Development Environment within Linux on Windows 10
Badass Terminal: WSL, macOS, and Ubuntu dotfiles update!!! by Jessica Deen

Are you using WSL?

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Real Browser Integration Testing with Selenium Standalone, Chrome, and ASP.NET Core 2.1

May 23, 2018

Description:

I find your lack of tests disturbing

Buckle up kids, this is nuts and I'm probably doing it wrong. ;) And it's 2am and I wrote this fast. I'll come back tomorrow and fix the spelling.

I want to have lots of tests to make sure my new podcast site is working well. As mentioned before, I've been updating the site to ASP.NET Core 2.1.

Here are some posts if you want to catch up:

Eyes wide open - Correct Caching is always hard
The Programmer's Hindsight - Caching with HttpClientFactory and Polly Part 2
Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly
Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly
HttpClientFactory for typed HttpClient instances in ASP.NET Core 2.1
Updating jQuery-based Lazy Image Loading to IntersectionObserver
Automatic Unit Testing in .NET Core plus Code Coverage in Visual Studio Code
Setting up Application Insights took 10 minutes. It created two days of work for me.
Upgrading my podcast site to ASP.NET Core 2.1 in Azure plus some Best Practices
Using LazyCache for clean and simple .NET Core in-memory caching

I've been doing my testing with XUnit and I want to test in layers.

Basic Unit Testing

Simply create a Razor Page's Model in memory and call OnGet or WhateverMethod. At this point you are NOT making HTTP calls and there is no web server.

public IndexModel pageModel;

public IndexPageTests()
{
var testShowDb = new TestShowDatabase();
pageModel = new IndexModel(testShowDb);
}

[Fact]
public async void MainPageTest()
{
// FAKE HTTP GET "/"
IActionResult result = await pageModel.OnGetAsync(null, null);

Assert.NotNull(result);
Assert.True(pageModel.OnHomePage); //we are on the home page, because "/"
Assert.Equal(16, pageModel.Shows.Count()); //home page has 16 shows showing
Assert.Equal(620, pageModel.LastShow.ShowNumber); //last test show is #620
}

Moving out a layer...

In-Memory Testing with both Client and Server using WebApplicationFactory

Here we are starting up the app and calling it with a client, but the "HTTP" of it all is happening in memory/in process. There are no open ports, there's no localhost:5000. We can still test HTTP semantics though.

public class TestingFunctionalTests : IClassFixture<WebApplicationFactory<Startup>>
{
public HttpClient Client { get; }
public ServerFactory<Startup> Server { get; }

public TestingFunctionalTests(ServerFactory<Startup> server)
{
Client = server.CreateClient();
Server = server;
}

[Fact]
public async Task GetHomePage()
{
// Arrange & Act
var response = await Client.GetAsync("/");

// Assert
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
}
...
}

Testing with a real Browser and real HTTP using Selenium Standalone and Chrome

THIS is where it gets interesting with ASP.NET Core 2.1 as we are going to fire up both the complete web app, talking to the real back end (although it could talk to a local test DB if you want) as well as a real headless version of Chrome being managed by Selenium Standalone and talked to with the WebDriver. It sounds complex, but it's actually awesome and super useful.

First I add references to Selenium.Support and Selenium.WebDriver to my Test project:

dotnet add package Selenium.Support
dotnet add package Selenium.WebDriver

Make sure you have node and npm then you can get Selenium Standalone like this:

npm install -g selenium-standalone@latest
selenium-standalone install

Chrome is being controlled by automated test software

Selenium, to be clear, puts your browser on a puppet's strings. Even Chrome knows it's being controlled! It's using the (soon to be standard, but clearly de facto standard) WebDriver protocol. Imagine if your browser had a localhost REST protocol where you could interrogate it and click stuff! I've been using Selenium for over 11 years. You can even test actual Windows apps (not in the browser) with WinAppDriver/Appium but that's for another post.

Now for this part, bear with me, because the ServerFactory class I'm about to make is doing two things. It's setting up my ASP.NET Core 2.1 app and actually running it so it's listening on https://localhost:5001. It's assuming a few things that I'll point out. It also (perhaps questionably) launches Selenium Standalone from within its constructor. Questionable, to be clear, and there are other ways to do this, but this is VERY simple.

If that offends you - remembering that you do need to start Selenium Standalone with "selenium-standalone start" somehow - you could do it OUTSIDE your test in a script.

Perhaps do the startup/teardown work in a PowerShell or Shell script. Start it up, save the process id, then stop it when you're done. Note I'm also doing checking code coverage here with Coverlet but that's not related to Selenium - I could just "dotnet test."

#!/usr/local/bin/powershell
$SeleniumProcess = Start-Process "selenium-standalone" -ArgumentList "start" -PassThru
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests
Stop-Process -Id $SeleniumProcess.Id

Here my SeleniumServerFactory is getting my Browser and Server ready.

SIDEBAR NOTE: I want to point out that this is NOT perfect and it's literally the simplest thing possible to get things working. It's my belief, though, that there are some problems here and that I shouldn't have to fake out the "new TestServer" in CreateServer there. While the new WebApplicationFactory is great for in-memory unit testing, it should be just as easy to fire up your app and use a real port for things like Selenium testing. Here I'm building and starting the IWebHostBuilder myself (!) and then making a fake TestServer only to satisfy the CreateServer method, which I think should not have a concrete class return type. For testing, ideally I could easily get either an "InMemoryWebApplicationFactory" and a "PortUsingWebApplicationFactory" (naming is hard). Hopefully this is somewhat clear and something that can be easily adjusted for ASP.NET Core 2.1.x.

My app is configured to listen on both http://localhost:5000 and https://localhost:5001, so you'll note where I'm getting that last value (in an attempt to avoid hard-coding it). We also make sure to stop both the Server and the Browser in Dispose() at the bottom.

public class SeleniumServerFactory<TStartup> : WebApplicationFactory<Startup> where TStartup : class
{
public string RootUri { get; set; } //Save this use by tests

Process _process;
IWebHost _host;

public SeleniumServerFactory()
{
ClientOptions.BaseAddress = new Uri("https://localhost"); //will follow redirects by default

_process = new Process() {
StartInfo = new ProcessStartInfo {
FileName = "selenium-standalone",
Arguments = "start",
UseShellExecute = true
}
};
_process.Start();
}

protected override TestServer CreateServer(IWebHostBuilder builder)
{
//Real TCP port
_host = builder.Build();
_host.Start();
RootUri = _host.ServerFeatures.Get<IServerAddressesFeature>().Addresses.LastOrDefault(); //Last is https://localhost:5001!

//Fake Server we won't use...this is lame. Should be cleaner, or a utility class
return new TestServer(new WebHostBuilder().UseStartup<TStartup>());
}

protected override void Dispose(bool disposing)
{
    base.Dispose(disposing);
    if (disposing)
    {
        _host.Dispose();
        _process.CloseMainWindow(); //Be sure to stop Selenium Standalone
    }
}
}

But what does a complete series of tests look like? I have a Server, a Browser, and a (theoretically optional) HttpClient. Focus on the Browser and Server.

At the point when a single test starts, my site is up (the Server) and an invisible headless Chrome (the Browser) is actually being puppeted with local calls via WebDriver. All this is hidden from you - if you want. You can certainly see Chrome (or other browsers) get automated, but what's nice about Selenium Standalone with hidden/headless Browser testing is that my unit tests now also include these complete Integration Tests and can run as part of my Continuous Integration Build.

Again, layers. I test classes, then move out and test Http Request/Response interactions, and finally the site is up and I'm making sure I can navigate and that data is loading. I'm automating the "smoke tests" that I used to do myself! And I can make as many of these as I'd like now that the scaffolding work is done.

public class SeleniumTests : IClassFixture<SeleniumServerFactory<Startup>>, IDisposable
{
public SeleniumServerFactory<Startup> Server { get; }
public IWebDriver Browser { get; }
public HttpClient Client { get; }
public ILogs Logs { get; }

public SeleniumTests(SeleniumServerFactory<Startup> server)
{
Server = server;
Client = server.CreateClient(); //weird side effecty thing here. This call shouldn't be required for setup, but it is.

var opts = new ChromeOptions();
opts.AddArgument("--headless"); //Optional, comment this out if you want to SEE the browser window
opts.SetLoggingPreference(OpenQA.Selenium.LogType.Browser, LogLevel.All);

var driver = new RemoteWebDriver(opts);
Browser = driver;
Logs = new RemoteLogs(driver); //TODO: Still not bringing the logs over yet
}

[Fact]
public void LoadTheMainPageAndCheckTitle()
{
Browser.Navigate().GoToUrl(Server.RootUri);
Assert.StartsWith("Hanselminutes Technology Podcast - Fresh Air and Fresh Perspectives for Developers", Browser.Title);
}

[Fact]
public void ThereIsAnH1()
{
Browser.Navigate().GoToUrl(Server.RootUri);

var headerSelector = By.TagName("h1");
Assert.Equal("HANSELMINUTES PODCAST\r\nby Scott Hanselman", Browser.FindElement(headerSelector).Text);
}

[Fact]
public void KevinScottTestThenGoHome()
{
Browser.Navigate().GoToUrl(Server.RootUri + "/631/how-do-you-become-a-cto-with-microsofts-cto-kevin-scott");

var headerSelector = By.TagName("h1");
var link = Browser.FindElement(headerSelector);
link.Click();
Assert.Equal(Browser.Url.TrimEnd('/'),Server.RootUri); //WTF
}

public void Dispose()
{
Browser.Dispose();
}
}

Here's a build, unit test/selenium test with code coverage actually running. I started running it from PowerShell. The black window in the back is Selenium Standalone doing its thing (again, could be hidden).

Two consoles, one with PowerShell running XUnit and one running Selenium

If I comment out the "--headless" line, I'll see this as Chrome is automated. Cool.

Chrome is loading my site and being automated

Of course, I can also run these in the .NET Core Test Explorer in either Visual Studio Code, or Visual Studio.


Great fun. What are your thoughts?

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Installing PowerShell Core on a Raspberry Pi (powered by .NET Core)

May 18, 2018

Description:

PowerShell Core on a Raspberry Pi!

Earlier this week I set up .NET Core and Docker on a Raspberry Pi and found that I could run my podcast website quite easily on a Pi. Check that post out as there's a lot going on. I can test within a Linux Container and output the test results to the host and then open them in VS. I also explored a reasonably complex Dockerfile that is both multiarch and multistage. I can reliably build and test my website either inside a container or on the bare metal of Windows or Linux. Very fun.

As primarily a Windows developer I have lots of batch/cmd files like "test.bat" or "dockerbuild.bat." They start as little throwaway bits of automation, but as the project grows they inevitably become more complex.

I'm not interested in "selling" anyone PowerShell. If you like bash, use bash, it's lovely, as are shell scripts. PowerShell is object-oriented in its pipeline, moving lists of real objects as standard output. They are different and most importantly, they can live together. Just like you might call Python scripts from bash, you can call PowerShell scripts from bash, or vice versa. Another tool in our toolkits.

PS /home/pi> Get-Process | Where-Object WorkingSet -gt 10MB

NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
0 0.00 10.92 890.87 917 917 docker-containe
0 0.00 35.64 1,140.29 449 449 dockerd
0 0.00 10.36 0.88 1272 037 light-locker
0 0.00 20.46 608.04 1245 037 lxpanel
0 0.00 69.06 32.30 3777 749 pwsh
0 0.00 31.60 107.74 647 647 Xorg
0 0.00 10.60 0.77 1279 037 zenity
0 0.00 10.52 0.77 1280 037 zenity

Bash and shell scripts are SUPER powerful. It's a whole world. But it is text based (or json for some newer things) so you're often thinking about text more.

pi@raspberrypidotnet:~ $ ps aux | sort -rn -k 5,6 | head -n6
root 449 0.5 3.8 956240 36500 ? Ssl May17 19:00 /usr/bin/dockerd -H fd://
root 917 0.4 1.1 910492 11180 ? Ssl May17 14:51 docker-containerd --config /var/run/docker/containerd/containerd.toml
root 647 0.0 3.4 155608 32360 tty7 Ssl+ May17 1:47 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
pi 1245 0.2 2.2 153132 20952 ? Sl May17 10:08 lxpanel --profile LXDE-pi
pi 1272 0.0 1.1 145928 10612 ? Sl May17 0:00 light-locker
pi 1279 0.0 1.1 145020 10856 ? Sl May17 0:00 zenity --warning --no-wrap --text

You can take it as far as you like. For some it's intuitive power, for others, it's baroque.

pi@raspberrypidotnet:~ $ ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
0.00 Mb COMMAND
161.14 Mb /usr/bin/dockerd -H fd://
124.20 Mb docker-containerd --config /var/run/docker/containerd/containerd.toml
78.23 Mb lxpanel --profile LXDE-pi
66.31 Mb /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
61.66 Mb light-locker

Point is, there's choice. Here's a nice article about PowerShell from the perspective of a Linux user. Can I install PowerShell on my Raspberry Pi (or any Linux machine) and use the same scripts in both places? YES.

For many years PowerShell was a Windows-only thing that was part of the closed Windows ecosystem. In fact, here's video of me nearly 12 years ago (I was working in banking) talking to Jeffrey Snover about PowerShell. Today, PowerShell is open source up at https://github.com/PowerShell with lots of docs and scripts, also open source. PowerShell is supported on Windows, Mac, and a half-dozen Linuxes. Sound familiar? That's because it's powered (ahem) by open source cross platform .NET Core. You can get PowerShell Core 6.0 here on any platform.

Don't want to install it? Start it up in Docker in seconds with

docker run -it microsoft/powershell

Sweet. How about Raspbian on my ARMv7 based Raspberry Pi? I was running Raspbian Jessie and PowerShell is supported on Raspbian Stretch (newer), so I upgraded from Jessie to Stretch (and tidied up and did the firmware while I was at it) with:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list
$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
$ sudo rpi-update

Cool. Now I'm on Raspbian Stretch on my Raspberry Pi 3. Let's install PowerShell! These are just the most basic Getting Started instructions. Check out GitHub for advanced and detailed info if you have issues with prerequisites or paths.

NOTE: Here I'm getting PowerShell Core 6.0.2. Be sure to check the releases page for newer releases if you're reading this in the future. I've also used 6.1.0 (in preview) with success. The next 6.1 preview will upgrade to .NET Core 2.1. If you're just evaluating, get the latest preview as it'll have the most recent bug fixes.

$ sudo apt-get install libunwind8
$ wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.2/powershell-6.0.2-linux-arm32.tar.gz
$ mkdir ~/powershell
$ tar -xvf ./powershell-6.0.2-linux-arm32.tar.gz -C ~/powershell
$ sudo ln -s ~/powershell/pwsh /usr/bin/pwsh
$ sudo ln -s ~/powershell/pwsh /usr/local/bin/powershell
$ powershell

Lovely.

GOTCHA: Because I upgraded from Jessie to Stretch, I ran into a bug where libssl1.0.0 is getting loaded over libssl1.0.2. This is a complex native issue with interaction between PowerShell and .NET Core 2.0 that's being fixed. Only upgraded machines like mine will hit it, but it's easily fixed with sudo apt-get remove libssl1.0.0

Now this means my PowerShell build scripts can work on both Windows and Linux. This is a deeply trivial example (just one line) but note the "shebang" at the top that lets Linux know what a *.ps1 file is for. That means I can keep using bash/zsh/fish on Raspbian, but still "build.ps1" or "test.ps1" on any platform.

#!/usr/local/bin/powershell
dotnet watch --project .\hanselminutes.core.tests test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov

Here's a few totally random but lovely PowerShell examples:

PS /home/pi> Get-Date | Select-Object -Property * | ConvertTo-Json
{
"DisplayHint": 2,
"DateTime": "Sunday, May 20, 2018 5:55:35 AM",
"Date": "2018-05-20T00:00:00+00:00",
"Day": 20,
"DayOfWeek": 0,
"DayOfYear": 140,
"Hour": 5,
"Kind": 2,
"Millisecond": 502,
"Minute": 55,
"Month": 5,
"Second": 35,
"Ticks": 636623925355021162,
"TimeOfDay": {
"Ticks": 213355021162,
"Days": 0,
"Hours": 5,
"Milliseconds": 502,
"Minutes": 55,
"Seconds": 35,
"TotalDays": 0.24693868190046295,
"TotalHours": 5.9265283656111105,
"TotalMilliseconds": 21335502.1162,
"TotalMinutes": 355.59170193666665,
"TotalSeconds": 21335.502116199998
},
"Year": 2018
}

You can take PowerShell objects to and from Objects, Hashtables, JSON, etc.

PS /home/pi> $hash = @{ Number = 1; Shape = "Square"; Color = "Blue"}
PS /home/pi> $hash

Name Value
---- -----
Shape Square
Color Blue
Number 1


PS /home/pi> $hash | ConvertTo-Json
{
"Shape": "Square",
"Color": "Blue",
"Number": 1
}

Here's a nice one from MCPMag:

PS /home/pi> $URI = "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='{0}, {1}')&format=json&env=store://datatables.org/alltableswithkeys" -f 'Omaha','NE'
PS /home/pi> $Data = Invoke-RestMethod -Uri $URI
PS /home/pi> $Data.query.results.channel.item.forecast|Format-Table

code date day high low text
---- ---- --- ---- --- ----
39 20 May 2018 Sun 62 56 Scattered Showers
30 21 May 2018 Mon 78 53 Partly Cloudy
30 22 May 2018 Tue 88 61 Partly Cloudy
4 23 May 2018 Wed 89 67 Thunderstorms
4 24 May 2018 Thu 91 68 Thunderstorms
4 25 May 2018 Fri 92 69 Thunderstorms
34 26 May 2018 Sat 89 68 Mostly Sunny
34 27 May 2018 Sun 85 65 Mostly Sunny
30 28 May 2018 Mon 85 63 Partly Cloudy
47 29 May 2018 Tue 82 63 Scattered Thunderstorms

Or a one-liner if you want to be obnoxious.

PS /home/pi> (Invoke-RestMethod -Uri "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='Omaha, NE')&format=json&env=store://datatables.org/alltableswithkeys").query.results.channel.item.forecast|Format-Table

Example: This won't work on Linux as it's using Windows-specific APIs, but if you've got PowerShell on your Windows machine, try out this one-liner for a cool demo:

iex (New-Object Net.WebClient).DownloadString("http://bit.ly/e0Mw9w")

Thoughts?

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Building, Running, and Testing .NET Core and ASP.NET Core 2.1 in Docker on a Raspberry Pi (ARM32)

May 16, 2018

Description:

I love me some Raspberry Pi. They are great little learning machines and are super fun for kids to play with. Even if those kids are adults and they build a 6 node Kubernetes Raspberry Pi Cluster.

Open source .NET Core runs basically everywhere - Windows, Mac, and a dozen Linuxes. However, there is an SDK (that compiles and builds) and a Runtime (that does the actual running of your app). In the past, the .NET Core SDK (to be clear, the ability to "dotnet build") wasn't supported on ARMv7/ARMv8 chips like the Raspberry Pi. Now it is.

.NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu!

Note: .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. Folks on the Azure IoT Edge team use the .NET Core Bionic ARM32 Docker images to support developers writing C# with Edge devices.

There are two ways to run .NET Core on a Raspberry Pi.

One, use Docker. This is literally the fastest and easiest way to get .NET Core up and running on a Pi. It sounds crazy but Raspberry Pis are brilliant little Docker container capable systems. You can do it in minutes, truly. You can install Docker quickly on a Raspberry Pi with just:

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker pi

After installing Docker you'll want to log out and back in (so the group change takes effect). You might want to try a quick sample to make sure .NET Core runs! You can explore the available Docker tags at https://hub.docker.com/r/microsoft/dotnet/tags/ and you can read about the .NET Core Docker samples here https://github.com/dotnet/dotnet-docker/tree/master/samples/dotnetapp

Now I can just docker run and then pass in "dotnet --info" to find out about dotnet on my Pi.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 2.1.300-rc1-008673
Commit: f5e3ddbe73

Runtime Environment:
OS Name: debian
OS Version: 9
OS Platform: Linux
RID: debian.9-x86
Base Path: /usr/share/dotnet/sdk/2.1.300-rc1-008673/

Host (useful for support):
Version: 2.1.0-rc1
Commit: eb9bc92051

.NET Core SDKs installed:
2.1.300-rc1-008673 [/usr/share/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.NETCore.App 2.1.0-rc1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download

This is super cool. There I'm on the Raspberry Pi (RPi) and I just ask for the dotnet:2.1-sdk and because they are using "multiarch" docker files, Docker does the right thing and it just works. If you want to use .NET Core on ARM32 with Docker, you can use any of the following tags.

Note: The first three tags are multi-arch and bionic is Ubuntu 18.04. The codename stretch is Debian 9. So I'm using 2.1-sdk and it's working on my RPi, but I can be specific if I'd prefer.

2.1-sdk
2.1-runtime
2.1-aspnetcore-runtime
2.1-sdk-stretch-arm32v7
2.1-runtime-stretch-slim-arm32v7
2.1-aspnetcore-runtime-stretch-slim-arm32v7
2.1-sdk-bionic-arm32v7
2.1-runtime-bionic-arm32v7
2.1-aspnetcore-runtime-bionic-arm32v7

Try one in minutes like this:

docker run --rm microsoft/dotnet-samples:dotnetapp

Here it is downloading the images...

Docker on a Raspberry Pi

In previous versions of .NET Core's Dockerfiles it would fail if you were running an x64 image on ARM:

standard_init_linux.go:190: exec user process caused "exec format error"

Different processors! But with multi-arch (per https://github.com/dotnet/announcements/issues/14 from Kendra at Microsoft) it just works with 2.1.

Docker has a multi-arch feature that microsoft/dotnet-nightly recently started utilizing. The plan is to port this to the official microsoft/dotnet repo shortly. The multi-arch feature allows a single tag to be used across multiple machine configurations. Without this feature each architecture/OS/platform requires a unique tag. For example, the microsoft/dotnet:1.0-runtime tag is based on Debian and microsoft/dotnet:1.0-runtime-nanoserver is based on Nano Server. With multi-arch there will be one common microsoft/dotnet:1.0-runtime tag. If you pull that tag from a Linux container environment you will get the Debian based image whereas if you pull that tag from a Windows container environment you will get the Nano Server based image. This helps provide tag uniformity across Docker environments thus eliminating confusion.

In these examples above I can:

Run a preconfigured app within a Docker image, like: docker run --rm microsoft/dotnet-samples:dotnetapp
Run dotnet commands within the SDK image, like: docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info
Run an interactive terminal within the SDK image, like: docker run --rm -it microsoft/dotnet:2.1-sdk

As a quick example, here I'll jump into a container and new up a quick console app and run it, just to prove I can. This work will be thrown away when I exit the container.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk
root@063f3c50c88a:/# ls
bin boot dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
root@063f3c50c88a:/# cd ~
root@063f3c50c88a:~# mkdir mytest
root@063f3c50c88a:~# cd mytest/
root@063f3c50c88a:~/mytest# dotnet new console
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on /root/mytest/mytest.csproj...
Restoring packages for /root/mytest/mytest.csproj...
Installing Microsoft.NETCore.DotNetAppHost 2.1.0-rc1.
Installing Microsoft.NETCore.DotNetHostResolver 2.1.0-rc1.
Installing NETStandard.Library 2.0.3.
Installing Microsoft.NETCore.DotNetHostPolicy 2.1.0-rc1.
Installing Microsoft.NETCore.App 2.1.0-rc1.
Installing Microsoft.NETCore.Platforms 2.1.0-rc1.
Installing Microsoft.NETCore.Targets 2.1.0-rc1.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.props.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.targets.
Restore completed in 15.8 sec for /root/mytest/mytest.csproj.

Restore succeeded.
root@063f3c50c88a:~/mytest# dotnet run
Hello World!
root@063f3c50c88a:~/mytest# dotnet exec bin/Debug/netcoreapp2.1/mytest.dll
Hello World!

If you try it yourself, you'll note that "dotnet run" isn't very fast. That's because it does a restore, build, and run. Compilation isn't super quick on these tiny devices. You'll want to do as little work as possible. Rather than a "dotnet run" all the time, I'll do a "dotnet build" then a "dotnet exec" which is very fast.

If you're going to do Docker and .NET Core, I can't stress enough how useful the resources are over at https://github.com/dotnet/dotnet-docker.

Building .NET Core Apps with Docker
.NET Core Docker Sample - This sample builds, tests, and runs the sample. It includes and builds multiple projects.
ASP.NET Core Docker Sample - This sample demonstrates using Docker with an ASP.NET Core Web App.
Develop .NET Core Apps in a Container
Develop .NET Core Applications - This sample shows how to develop, build and test .NET Core applications with Docker without the need to install the .NET Core SDK.
Develop ASP.NET Core Applications - This sample shows how to develop and test ASP.NET Core applications with Docker without the need to install the .NET Core SDK.
Optimizing Container Size
.NET Core Alpine Docker Sample - This sample builds, tests, and runs an application using Alpine.
.NET Core self-contained Sample - This sample builds and runs an application as a self-contained application.
ARM32 / Raspberry Pi
.NET Core ARM32 Docker Sample - This sample builds and runs an application with Debian on ARM32 (works on Raspberry Pi).
ASP.NET Core ARM32 Docker Sample - This sample builds and runs an ASP.NET Core application with Debian on ARM32 (works on Raspberry Pi).

I found the samples to be super useful...be sure to dig into the Dockerfiles themselves as it'll give you a ton of insight into how to structure your own files. Being able to do Multistage Dockerfiles is crucial when building on a small device like a RPi. You want to do as little work as possible and let Docker cache as many layers with its internal "smarts." If you're not thoughtful about this, you'll end up wasting 10x the time building image layers every build.

Dockerizing a real ASP.NET Core Site with tests!

Can I take my podcast site and Dockerize it and build/test/run it on a Raspberry Pi? YES.

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY hanselminutes.core/*.csproj ./hanselminutes.core/
COPY hanselminutes.core.tests/*.csproj ./hanselminutes.core.tests/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/hanselminutes.core
RUN dotnet build


FROM build AS testrunner
WORKDIR /app/hanselminutes.core.tests
ENTRYPOINT ["dotnet", "test", "--logger:trx"]


FROM build AS test
WORKDIR /app/hanselminutes.core.tests
RUN dotnet test


FROM build AS publish
WORKDIR /app/hanselminutes.core
RUN dotnet publish -c Release -o out


FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=publish /app/hanselminutes.core/out ./
ENTRYPOINT ["dotnet", "hanselminutes.core.dll"]

Love it. Now I can "docker build ." on my Raspberry Pi. It will restore, test, and build. If the tests fail, the Docker build will fail.

See how there's an extra section up there called "testrunner" and then after it is "test?" That testrunner section is a no-op. It sets an ENTRYPOINT but it is never used...yet. The ENTRYPOINT is an implicit run if it is the last line in the Dockerfile. That's there so I can "Run up to it" if I want to.

I can just build and run like this:

docker build -t podcast .
docker run --rm -it -p 8000:80 podcast

NOTE/GOTCHA: Note that the "runtime" image is microsoft/dotnet:2.1-aspnetcore-runtime, not microsoft/dotnet:2.1-runtime. That aspnetcore one pre-includes the binaries I need for running an ASP.NET app, so I can just include a single reference to "<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-rc1-final" />" in my csproj. If I didn't use the aspnetcore-runtime base image, I'd need to manually pull in all the ASP.NET Core packages that I want. Using the base image might make the resulting image files larger, but it's a balance between convenience and size. It's up to you. You can manually include just the packages you need, or pull in the "Microsoft.AspNetCore.App" meta-package for convenience. My resulting "podcast" image ended up 205 megs, so not too bad, but of course if I wanted I could trim it in a number of ways.

Or, if I JUST want test results from Docker, I can do this! That means I can run the tests in the Docker container, mount a volume between the Linux container and the (theoretical) Windows host, and then open the resulting .trx file in Visual Studio!

docker build --pull --target testrunner -t podcast:test .
docker run --rm -v D:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test

Check it out! These are the test results from the tests that ran within the Linux Container:

XUnit Tests from within a Docker Container on Linux viewed within Visual Studio on Windows

Here's the result. I've now got my Podcast website running in Docker on an ARM32 Raspberry Pi 3 with just an hour's work (writing the Dockerfile)!

It's my podcast site running under Docker on .NET Core 2.1 on a Raspberry Pi

Second - did you make it this far down? - You can just install the .NET Core 2.1 SDK "on the metal." No Docker, just get the tar.gz and set it up. Looking at the RPi ARM32v7 Dockerfile, I can install it on the metal like this. Note I'm getting the .NET Core SDK *and* the ASP.NET Core shared runtime. In the final release build you will just get the SDK and it'll include everything, including ASP.NET.

$ sudo apt-get -y update
$ sudo apt-get -y install libunwind8 gettext
$ wget https://dotnetcli.blob.core.windows.net/dotnet/Sdk/2.1.300-rc1-008673/dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz
$ wget https://dotnetcli.blob.core.windows.net/dotnet/aspnetcore/Runtime/2.1.0-rc1-final/aspnetcore-runtime-2.1.0-rc1-final-linux-arm.tar.gz
$ sudo mkdir /opt/dotnet
$ sudo tar -xvf dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz -C /opt/dotnet/
$ sudo tar -xvf aspnetcore-runtime-2.1.0-rc1-final-linux-arm.tar.gz -C /opt/dotnet/
$ sudo ln -s /opt/dotnet/dotnet /usr/local/bin
$ dotnet --info

Cross-platform for the win!

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Using LazyCache for clean and simple .NET Core in-memory caching

May 12, 2018

Description:

Tai Chi by Luisen Rodrigo - Used under CC

I'm continuing to use .NET Core 2.1 to power my Podcast Site, and I've done a series of posts on some of the experiments I've been doing. I also upgraded to .NET Core 2.1 RC that came out this week. Here are some posts if you want to catch up:

Eyes wide open - Correct Caching is always hard
The Programmer's Hindsight - Caching with HttpClientFactory and Polly Part 2
Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly
Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly
HttpClientFactory for typed HttpClient instances in ASP.NET Core 2.1
Updating jQuery-based Lazy Image Loading to IntersectionObserver
Automatic Unit Testing in .NET Core plus Code Coverage in Visual Studio Code
Setting up Application Insights took 10 minutes. It created two days of work for me.
Upgrading my podcast site to ASP.NET Core 2.1 in Azure plus some Best Practices

Having a blast, if I may say so.

I've been trying a number of ways to cache locally. I have an expensive call to a backend (7-8 seconds or more, without deserialization) so I want to cache it locally for a few hours until it expires. I have a way that works very well using a SemaphoreSlim. There are some issues to be aware of, but it has been rock solid. However, in the comments of the last caching post a number of people suggested I use "LazyCache."

Alastair from the LazyCache team said this in the comments:

LazyCache wraps your "build stuff I want to cache" func in a Lazy<> or an AsyncLazy<> before passing it into MemoryCache to ensure the delegate only gets executed once as you retrieve it from the cache. It also allows you to swap between sync and async for the same cached thing. It is just a very thin wrapper around MemoryCache to save you the hassle of doing the locking yourself. A netstandard 2 version is in pre-release.
Since you asked the implementation is in CachingService.cs#L119 and proof it works is in CachingServiceTests.cs#L343

Nice! Sounds like it's worth trying out. Most importantly, it'll allow me to "refactor via subtraction."

I want to have my "GetShows()" method go off and call the backend "database" which is a REST API over HTTP living at SimpleCast.com. That backend call is expensive and doesn't change often. I publish new shows every Thursday, so ideally SimpleCast would have a standard WebHook and I'd cache the result forever until they called me back. For now I will just cache it for 8 hours - a long but mostly arbitrary number. Really want that WebHook as that's the correct model, IMHO.

LazyCache was added in my ConfigureServices in Startup.cs:

services.AddLazyCache();

Kind of anticlimactic. ;)

Then I just make a method that knows how to populate my cache. That's just a "Func" that returns a Task of List of Shows as you can see below. Then I call IAppCache's "GetOrAddAsync" from LazyCache that either GETS the List of Shows out of the Cache OR it calls my Func, does the actual work, then returns the results. The results are cached for 8 hours. Compare this to my previous code and it's a lot cleaner.

public class ShowDatabase : IShowDatabase
{
    private readonly IAppCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IAppCache appCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = appCache;
    }

    public async Task<List<Show>> GetShows()
    {
        Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
        var retVal = await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
        return retVal;
    }

    private async Task<List<Show>> PopulateShowsCache()
    {
        List<Show> shows = await _client.GetShows();
        _logger.LogInformation($"Loaded {shows.Count} shows");
        return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
    }
}

It's always important to point out there's a dozen or more ways to do this. I'm not selling a prescription here or The One True Way, but rather exploring the options and edges and examining the trade-offs.

As mentioned before, my using "shows" as a magic string for the key here makes no guarantee that another co-worker isn't also using "shows" as the key. Solution? Depends. I could have a function-specific unique key, but that only ensures this function is fast twice. If someone else is calling the backend themselves I'm losing the benefits of a centralized (albeit process-local - not distributed like Redis) cache.

I'm also caching the full list and then doing a where/filter every time. A little sloppiness on my part, but also because I'm still feeling this area out. Do I want to cache the whole thing and then let the callers filter? Or do I want to have GetShows() and GetActiveShows()? Dunno yet, but worth pointing out - see the sketch below.

There are layers to caching. Do I cache the HttpResponse but not the deserialization? Here I'm caching the List<Shows>, complete. I like caching List<T> because a caller can query it, although I'm sending back just active shows (see above). Another perspective is to use the <cache> TagHelper in Razor and cache Razor's resulting rendered HTML. There is value in caching the object graph, but I need to think about perhaps caching both List<T> AND the rendered HTML. I'll explore this next.
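To make that concrete, here's a minimal sketch of the "two entry points" option, reusing the IAppCache (_cache) and SimpleCastClient (_client) from the class above. GetActiveShows and the "shows-all" key are hypothetical names for illustration, not code from my site.

public async Task<List<Show>> GetShows()
{
    // Cache the complete, unfiltered list under one key for 8 hours.
    return await _cache.GetOrAddAsync("shows-all",
        () => _client.GetShows(),
        DateTimeOffset.Now.AddHours(8));
}

public async Task<List<Show>> GetActiveShows()
{
    // Reuse the cached full list and filter per call, so both entry points
    // still sit behind a single expensive backend call.
    var allShows = await GetShows();
    return allShows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
}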

I'm enjoying myself though. ;)

Go explore LazyCache! I'm using beta2 but there's a whole number of releases going back years and it's quite stable so far.

Lazy cache is a simple in-memory caching service. It has a developer friendly generics based API, and provides a thread safe cache implementation that guarantees to only execute your cachable delegates once (it's lazy!). Under the hood it leverages ObjectCache and Lazy to provide performance and reliability in heavy load scenarios.

For ASP.NET Core it's quick to experiment with LazyCache and get it set up. Give it a try, and share your favorite caching techniques in the comments.

Tai Chi photo by Luisen Rodrigo used under Creative Commons Attribution 2.0 Generic (CC BY 2.0), thanks!

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Announcing .NET Core 2.1 RC 1 Go Live AND .NET Core 3.0 Futures

May 10, 2018

Description:

I just got back from the Microsoft BUILD Conference where Scott Hunter and I announced both .NET Core 2.1 RC1 AND talked about .NET Core 3.0 in the future.

.NET Core 2.1 RC1

First, .NET Core 2.1's Release Candidate is out. This one has a Go Live license and it's very close to release.

You can download and get started with .NET Core 2.1 RC 1, on Windows, macOS, and Linux:

.NET Core 2.1 RC 1 SDK (includes the runtime)
.NET Core 2.1 RC 1 Runtime

You can see complete details of the release in the .NET Core 2.1 RC 1 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #1506. ASP.NET Core 2.1 RC 1 and Entity Framework 2.1 RC 1 are also releasing today. You can develop .NET Core 2.1 apps with Visual Studio 2017 15.7, Visual Studio for Mac 7.5, or Visual Studio Code.

Here's a deep dive on the performance benefits which are SIGNIFICANT. It's also worth noting that you can get 2x+ speed improvements for your builds/compiles, by using the .NET Core 2.1 RC SDK for building while continuing to target earlier .NET Core releases, like 2.0 for the Runtime.

Go Live - You can put this version in production and get support.
Alpine Support - There are docker images at 2.1-sdk-alpine and 2.1-runtime-alpine.
ARM Support - We can compile on Raspberry Pi now! .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. There are even Docker images for ARM32.
Brotli Support - new lossless compression algo for the web.
Tons of new Crypto Support.
Source Debugging from NuGet Packages (finally!) called "SourceLink."
.NET Core Global Tools:

dotnet tool install -g dotnetsay
dotnetsay

In fact, if you have Docker installed go try an ASP.NET Sample:

docker pull microsoft/dotnet-samples:aspnetapp
docker run --rm -it -p 8000:80 --name aspnetcore_sample microsoft/dotnet-samples:aspnetapp

.NET Core 3.0

This is huge. You'll soon be able to take your existing WinForms and WPF app (I did this with a 12 year old WPF app!) and swap out the underlying runtime. That means you can run WinForms and WPF on .NET Core 3 on Windows.

"Bringing desktop workloads to run on the top of .NET Core is great. We would love to close the loop and open source them as well. We are investigating how to do that." - Scott Hunter, Director PM, .NET, Microsoft

Why is this cool?

WinForms/WPF apps can be self-contained and run in a single folder.

No need to install anything, just xcopy deploy.
WinFormsApp1 can't affect WPFApp2 because they can each target their own .NET Core 3 version. Updates to the .NET Framework on Windows are system-wide and can sometimes cause problems with legacy apps. You'll now have total control and can update apps one at a time, and they can't affect each other.
C#, F# and VB already work with .NET Core 2.0. You will be able to build desktop applications with any of those three languages with .NET Core 3.

Secondly, you'll get to use all the new C# 7.x+ (and beyond) features sooner than ever. .NET Core moves fast but you can pick and choose the language features and libraries you want. For example, I can update BabySmash (my .NET 3.5 WPF app) to .NET Core 3.0 and use new C# features AND bring in UWP Controls that didn't exist when BabySmash was first written! WinForms and WPF apps will also get the new lightweight csproj format. More details here and a full video below.

Compile to a single EXE

Even more, why not compile the whole app into a single EXE? I can make BabySmash.exe and it'll just work. No install, everything self-contained.

.NET Core 3 will still be cross platform, but WinForms and WPF remain "W is for Windows" - the runtime is swappable, but they still P/Invoke into the Windows APIs. You can look elsewhere for .NET Core cross-platform UI apps with frameworks like Avalonia, Ooui, and Blazor.

Diagram showing that .NET Core will support Windows UI Frameworks

You can check out the video from BUILD here. We show 2.1, 3.0, and some amazing demos like compiling a .NET app into a single exe and running it on a computer from the audience, as well as taking the 12 year old BabySmash WPF app and running it on .NET Core 3.0 PLUS adding a UWP Touch Ink Control!

Lots of cool stuff coming today AND tomorrow with open source .NET Core!

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Eyes wide open - Correct Caching is always hard

May 3, 2018

Description:

Image from Pixabay used under Creative Commons

In my last post I talked about Caching and some of the stuff I've been doing to cache the results of a VERY expensive call to the backend that hosts my podcast.

As always, the comments are better than the post! Thanks to you, Dear Reader.

The code is below. Note that the MemoryCache is a singleton, but within the process. It is not (yet) a DistributedCache. Also note that Caching is Complex(tm) and that thousands of pages have been written about caching by smart people. This is a blog post as part of a series, so use your head and do your research. Don't take anyone's word for it.

Bill Kempf had an excellent comment on that post. Thanks Bill! He said:

The SemaphoreSlim is a bad idea. This "mutex" has visibility different from the state it's trying to protect. You may get away with it here if this is the only code that accesses that particular key in the cache, but work or not, it's a bad practice.
As suggested, GetOrCreate (or more appropriate for this use case, GetOrCreateAsync) should handle the synchronization for you.

My first reaction was, "bad idea?! Nonsense!" It took me a minute to parse his words and absorb. Ok, it took a few hours of background processing plus I had lunch.

Again, here's the code in question. I've removed logging for brevity. I'm also deeply not interested in your emotional investment in my brackets/braces style. It changes with my mood. ;)

public class ShowDatabase : IShowDatabase
{
private readonly IMemoryCache _cache;
private readonly ILogger _logger;
private SimpleCastClient _client;

public ShowDatabase(IMemoryCache memoryCache,
ILogger<ShowDatabase> logger,
SimpleCastClient client){
_client = client;
_logger = logger;
_cache = memoryCache;
}

static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

public async Task<List<Show>> GetShows() {
Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

var cacheKey = "showsList";
List<Show> shows = null;

//CHECK and BAIL - optimistic
if (_cache.TryGetValue(cacheKey, out shows))
{
return shows.Where(whereClause).ToList();
}

await semaphoreSlim.WaitAsync();
try
{
//RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
if (_cache.TryGetValue(cacheKey, out shows))
{
return shows.Where(whereClause).ToList();
}

shows = await _client.GetShows();

var cacheExpirationOptions = new MemoryCacheEntryOptions();
cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
cacheExpirationOptions.Priority = CacheItemPriority.Normal;

_cache.Set(cacheKey, shows, cacheExpirationOptions);
return shows.Where(whereClause).ToList();
}
catch (Exception e) {
throw;
}
finally {
semaphoreSlim.Release();
}
}
}

public interface IShowDatabase {
Task<List<Show>> GetShows();
}

SemaphoreSlim IS very useful. From the docs:

The System.Threading.Semaphore class represents a named (systemwide) or local semaphore. It is a thin wrapper around the Win32 semaphore object. Win32 semaphores are counting semaphores, which can be used to control access to a pool of resources.

The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short. SemaphoreSlim relies as much as possible on synchronization primitives provided by the common language runtime (CLR). However, it also provides lazily initialized, kernel-based wait handles as necessary to support waiting on multiple semaphores. SemaphoreSlim also supports the use of cancellation tokens, but it does not support named semaphores or the use of a wait handle for synchronization.

And my use of a Semaphore here is correct...for some definitions of the word "correct." ;) Back to Bill's wise words:

You may get away with it here if this is the only code that accesses that particular key in the cache, but work or not, it's a bad practice.

Ah! In this case, my cacheKey is "showsList" and I'm "protecting" it with a lock and double-check. That lock/check is fine and appropriate, HOWEVER I have no guarantee (other than that I wrote the whole app) that some other thread isn't also accessing the same IMemoryCache (remember, process-scoped singleton) at the same time. It's protected only within this function!

Here's where it gets even more interesting. I could make my own IMemoryCache wrapper, wrap things up, and then protect the inside with my own TryGetValue...but then I'm back to checking/double-checking, etc. However, while I could lock/protect on a key...what about the semantics of other cached values that may depend on my key? There are none, but you could see a world where there are.
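To make that concrete, here's a minimal sketch of what such a wrapper might look like - a hypothetical LockingMemoryCache, not code from my app - that keeps one SemaphoreSlim per key so every caller going through the wrapper gets the same check/double-check protection:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class LockingMemoryCache
{
    private readonly IMemoryCache _cache;
    // One lock per key, created on demand.
    private readonly ConcurrentDictionary<object, SemaphoreSlim> _locks =
        new ConcurrentDictionary<object, SemaphoreSlim>();

    public LockingMemoryCache(IMemoryCache cache) => _cache = cache;

    public async Task<T> GetOrAddAsync<T>(object key, Func<Task<T>> factory, TimeSpan ttl)
    {
        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(key, out T value)) return value;

        var keyLock = _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await keyLock.WaitAsync();
        try
        {
            //DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(key, out value)) return value;

            value = await factory();
            _cache.Set(key, value, ttl);
            return value;
        }
        finally
        {
            keyLock.Release();
        }
    }
}

Of course, anything that talks to the raw IMemoryCache directly still bypasses the protection, which is exactly the visibility problem Bill is describing.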

Yes, we are getting close to making our own implementation of Redis here, but bear with me. You have to know when to stop and say it's correct enough for this site or project BUT as Bill and the commenters point out, you also have to be Eyes Wide Open about the limitations and gotchas so they don't bite you as your app expands!

The suggestion was made to use the GetOrCreateAsync() extension method for MemoryCache. Bill and other commenters said:

As suggested, GetOrCreate (or more appropriate for this use case, GetOrCreateAsync) should handle the synchronization for you.

Sadly, it doesn't work that way. There's no guarantee (via locking like I was doing) that the factory method (the thing that populates the cache) won't get called twice. That is, someone could TryGetValue, get nothing, and continue on, while another thread is already in line to call the factory again.

public static async Task<TItem> GetOrCreateAsync<TItem>(this IMemoryCache cache, object key, Func<ICacheEntry, Task<TItem>> factory)
{
if (!cache.TryGetValue(key, out object result))
{
var entry = cache.CreateEntry(key);
result = await factory(entry);
entry.SetValue(result);
// need to manually call dispose instead of having a using
// in case the factory passed in throws, in which case we
// do not want to add the entry to the cache
entry.Dispose();
}

return (TItem)result;
}
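For contrast, here's a minimal sketch of the Lazy<> trick that libraries like LazyCache use (wrap the factory in a Lazy<> and lean on ObjectCache's atomic AddOrGetExisting). This assumes the System.Runtime.Caching package and is purely illustrative, not part of my site's code:

using System;
using System.Runtime.Caching;   // System.Runtime.Caching package (ObjectCache/MemoryCache)
using System.Threading.Tasks;

public static class LazySingleFlight
{
    // AddOrGetExisting is atomic: the first Lazy<Task<T>> stored for a key wins and
    // every other caller gets that same Lazy back, so the factory runs at most once
    // per cache entry lifetime. (If the factory throws, the faulted Lazy stays cached
    // until it expires - a trade-off to be aware of.)
    public static Task<T> GetOrAddAsync<T>(string key, Func<Task<T>> factory, DateTimeOffset absoluteExpiration)
    {
        var newLazy = new Lazy<Task<T>>(factory);
        var existing = (Lazy<Task<T>>)MemoryCache.Default.AddOrGetExisting(key, newLazy, absoluteExpiration);
        return (existing ?? newLazy).Value;
    }
}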

Is this the end of the world? Not at all. Again, what is your project's definition of correct? Computer science correct? Guaranteed to always work correct? Spec correct? Mostly works and doesn't crash all the time correct?

Do I want to:

Actively and aggressively avoid making my expensive backend call, at the risk of in fact having another part of the app make that call anyway? What I am doing with my cacheKey is clearly not a "best practice," although it works today.

Accept that my backend call could happen twice in short succession and the last caller's thread would ultimately populate the cache (sketched after this list)? My code would become a dozen lines simpler, have no process-wide locking, and still work adequately. However, it would be naïve caching at best. Even ConcurrentDictionary has no guarantees - "it is always possible for one thread to retrieve a value, and another thread to immediately update the collection by giving the same key a new value."
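To make option two concrete, here's a minimal sketch of that simpler, naïve version - same _cache and _client fields as my class above, no semaphore, and an acceptance that two cold-cache requests might each call the backend once:

public async Task<List<Show>> GetShows()
{
    Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

    if (!_cache.TryGetValue("showsList", out List<Show> shows))
    {
        // Two racing requests can both land here; the last one to finish
        // overwrites the cache entry. Adequate, but naïve.
        shows = await _client.GetShows();
        _cache.Set("showsList", shows, new MemoryCacheEntryOptions
        {
            AbsoluteExpiration = DateTime.Now.AddHours(4),
            Priority = CacheItemPriority.Normal
        });
    }
    return shows.Where(whereClause).ToList();
}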

What a fun discussion. What are your thoughts?

Sponsor: SparkPost’s cloud email APIs and C# library make it easy for you to add email messaging to your .NET applications and help ensure your messages reach your user’s inbox on time. Get a free developer account and a head start on your integration today!


© 2018 Scott Hanselman. All rights reserved.
     

The Programmer's Hindsight - Caching with HttpClientFactory and Polly Part 2

May 2, 2018

Description:

Hindsight - by Nate Steiner - Public Domain

In my last blog post Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly I actually failed to complete my mission. I talked to a few people (thanks Dylan and Damian and friends) and I think my initial goal may have been wrong.

I thought I wanted "magic add this policy and get free caching" for HttpClients that come out of the new .NET Core 2.1 HttpClientFactory, but first, nothing is free, and second, everything (in computer science) is layered. Am I caching the right thing at the right layer?

The good thing that comes out of explorations and discussions like this is Better Software. Given that I'm running Previews/Daily Builds of both .NET Core 2.1 (in preview as of the time of this writing) and Polly (always under active development), I realize I'm on some kind of cutting edge. The bad news (and it's not really bad) is that everything I want to do is possible, it's just not always easy. For example, a lot of "hooking up" happens when one makes a C# Extension Method and adds it into the ASP.NET Middleware Pipeline with "services.AddSomeStuffThatIsTediousButUseful()."

Polly and ASP.NET Core are insanely configurable, but I'm personally interested in the 80% or even the 90% case. The 10% will definitely require you/me to learn more about the internals of the system, while the 90% will ideally be abstracted away from the average developer (me).

I've had a Skype with Dylan from Polly and he's been updating the excellent Polly docs as we walk around how caching should work in an HttpClientFactory world. Fantastic stuff, go read it. I'll steal some here:

ASPNET Core 2.1 - What is HttpClient factory?

From ASPNET Core 2.1, Polly integrates with IHttpClientFactory. HttpClient factory is a factory that simplifies the management and usage of HttpClient in four ways. It:

allows you to name and configure logical HttpClients. For instance, you may configure a client that is pre-configured to access the github API;

manages the lifetime of HttpClientMessageHandlers to avoid some of the pitfalls associated with managing HttpClient yourself (the dont-dispose-it-too-often but also dont-use-only-a-singleton aspects);

provides configurable logging (via ILogger) for all requests and responses performed by clients created with the factory;

provides a simple API for adding middleware to outgoing calls, be that for logging, authorisation, service discovery, or resilience with Polly.

The Microsoft early announcement speaks more to these topics, and Steve Gordon's pair of blog posts (1; 2) are also an excellent read for deeper background and some great worked examples.

Polly and Polly policies work great with ASP.NET Core 2.1 and integrate nicely. I'm sure it will integrate even more conveniently with a few smart Extension Methods to abstract away the hard parts so we can fall into the "pit of success."

Caching with Polly and HttpClient

Here's where it gets interesting. To me. Or, you, I suppose, Dear Reader, if you made it this far into a blog post (and sentence) with too many commas.

This is a salient and important point:

Polly is generic (not tied to Http requests)

Now, this is where I got in trouble:

Caching with Polly CachePolicy in a DelegatingHandler caches at the HttpResponseMessage level

I ended up caching an HttpResponseMessage...but it has a "stream" inside it at HttpResponseMessage.Content. It's meant to be read once. Not cached. I wasn't caching a string, or some JSON, or some deserialized JSON objects, I ended up caching what's (effectively) an ephemeral one-time object and then de-serializing it every time. I mean, it's cached, but why am I paying the deserialization cost on every Page View?

The Programmer's Hindsight: This is such a classic programming/coding experience. Yesterday this was opaque and confusing. I didn't understand what was happening or why it was happening. Today - with The Programmer's Hindsight - I know exactly where I went wrong and why. Like, how did I ever think this was gonna work? ;)

As Dylan from Polly so wisely points out:

It may be more appropriate to cache at a level higher-up. For example, cache the results of stream-reading and deserializing to the local type your app uses. Which, ironically, I was already doing in my original code. It just felt heavy. Too much caching and too little business. I am trying to refactor it away and make it more generic!

This is my "ShowDatabase" (just a JSON file) that wraps my httpClient

public class ShowDatabase : IShowDatabase
{
private readonly IMemoryCache _cache;
private readonly ILogger _logger;
private SimpleCastClient _client;

public ShowDatabase(IMemoryCache memoryCache,
ILogger<ShowDatabase> logger,
SimpleCastClient client)
{
_client = client;
_logger = logger;
_cache = memoryCache;
}

static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

public async Task<List<Show>> GetShows()
{
Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

var cacheKey = "showsList";
List<Show> shows = null;

//CHECK and BAIL - optimistic
if (_cache.TryGetValue(cacheKey, out shows))
{
_logger.LogDebug($"Cache HIT: Found {cacheKey}");
return shows.Where(whereClause).ToList();
}

await semaphoreSlim.WaitAsync();
try
{
//RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
if (_cache.TryGetValue(cacheKey, out shows))
{
_logger.LogDebug($"Amazing Speed Cache HIT: Found {cacheKey}");
return shows.Where(whereClause).ToList();
}

_logger.LogWarning($"Cache MISS: Loading new shows");
shows = await _client.GetShows();
_logger.LogWarning($"Cache MISS: Loaded {shows.Count} shows");
_logger.LogWarning($"Cache MISS: Loaded {shows.Where(whereClause).ToList().Count} PUBLISHED shows");

var cacheExpirationOptions = new MemoryCacheEntryOptions();
cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
cacheExpirationOptions.Priority = CacheItemPriority.Normal;

_cache.Set(cacheKey, shows, cacheExpirationOptions);
return shows.Where(whereClause).ToList();
}
catch (Exception e)
{
_logger.LogCritical("Error getting episodes!");
_logger.LogCritical(e.ToString());
_logger.LogCritical(e?.InnerException?.ToString());
throw;
}
finally
{
semaphoreSlim.Release();
}
}
}

public interface IShowDatabase
{
Task<List<Show>> GetShows();
}

I'll move a bunch of this into some generic helpers for myself, or I'll use Akavache, or I'll try another Polly Cache Policy implemented farther up the call stack! Thanks for reading my ramblings!

UPDATE: Be sure to read the comments below AND my response in Part 2.

Sponsor: SparkPost’s cloud email APIs and C# library make it easy for you to add email messaging to your .NET applications and help ensure your messages reach your user’s inbox on time. Get a free developer account and a head start on your integration today!


© 2018 Scott Hanselman. All rights reserved.
     

Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly

Apr 27, 2018

Description:

A couple of days ago I Added Resilience and Transient Fault handling to your .NET Core HttpClient with Polly. Polly provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call. That means I can say things like "Call this API and try 3 times if there's any issues before you panic," as an example. It lets me move the cross-cutting concerns - the policies - out of the business part of the code and over to a central location or module, or even configuration if I wanted.

I've been upgrading my podcast of late at https://www.hanselminutes.com to ASP.NET Core 2.1 and Razor Pages with SimpleCast for the back end. Since I've removed SQL Server as my previous data store and I'm now sitting entirely on top of a third party API, I want to think about how often I call this API. As a rule, there's no reason to call it any more often than a change might occur.

I publish a new show every Thursday at 5pm PST, so I suppose I could cache the feed for 7 days, but sometimes I make description changes, add links, update titles, etc. The show gets many tens of thousands of hits per episode so I definitely don't want to abuse the SimpleCast API, so I decided that caching for 4 hours seemed reasonable.

I went and wrote a bunch of caching code on my own. This is fine and it works and has been in production for a few months without any issues.

A few random notes:

Stuff is being passed into the Constructor by the IoC system built into ASP.NET Core. That means the HttpClient, Logger, and MemoryCache are handed to this little abstraction. I don't new them up myself.

All my "Show Database" is, is a GetShows(). That means I have a TestDatabase that implements IShowDatabase that I use for some Unit Tests. And I could have multiple implementations if I liked.

Caching here is interesting. Sure, I could do the caching in just a line or two, but a caching double check is more needed than one often realizes.

I check the cache, and if I hit it, I am done and I bail. Yay!

If not, let's wait on a SemaphoreSlim. This is a great, simple way to manage waiting around a limited resource. I don't want to accidentally have two threads call out to the SimpleCast API if I'm literally in the middle of doing it already. "The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short."

So I check again inside that block to see if it showed up in the cache in the space between there and the previous check. Doesn't hurt to be paranoid.

Got it? Cool. Store it away and release in the finally of the try.

Don't copy paste this. My GOAL is to NOT have to do any of this, even though it's valid.

public class ShowDatabase : IShowDatabase
{
private readonly IMemoryCache _cache;
private readonly ILogger _logger;
private SimpleCastClient _client;

public ShowDatabase(IMemoryCache memoryCache,
ILogger<ShowDatabase> logger,
SimpleCastClient client)
{
_client = client;
_logger = logger;
_cache = memoryCache;
}

static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

public async Task<List<Show>> GetShows()
{
Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

var cacheKey = "showsList";
List<Show> shows = null;

//CHECK and BAIL - optimistic
if (_cache.TryGetValue(cacheKey, out shows))
{
_logger.LogDebug($"Cache HIT: Found {cacheKey}");
return shows.Where(whereClause).ToList();
}

await semaphoreSlim.WaitAsync();
try
{
//RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
if (_cache.TryGetValue(cacheKey, out shows))
{
_logger.LogDebug($"Amazing Speed Cache HIT: Found {cacheKey}");
return shows.Where(whereClause).ToList();
}

_logger.LogWarning($"Cache MISS: Loading new shows");
shows = await _client.GetShows();
_logger.LogWarning($"Cache MISS: Loaded {shows.Count} shows");
_logger.LogWarning($"Cache MISS: Loaded {shows.Where(whereClause).ToList().Count} PUBLISHED shows");

var cacheExpirationOptions = new MemoryCacheEntryOptions();
cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
cacheExpirationOptions.Priority = CacheItemPriority.Normal;

_cache.Set(cacheKey, shows, cacheExpirationOptions);
return shows.Where(whereClause).ToList();
}
catch (Exception e)
{
_logger.LogCritical("Error getting episodes!");
_logger.LogCritical(e.ToString());
_logger.LogCritical(e?.InnerException?.ToString());
throw;
}
finally
{
semaphoreSlim.Release();
}
}
}

public interface IShowDatabase
{
Task<List<Show>> GetShows();
}

Again, this is great and it works fine. But the BUSINESS is in _client.GetShows() and the rest is all CEREMONY. Can this be broken up? Sure, I could put stuff in a base class, or make an extension method and bury it in there, or use Akavache or make a GetOrFetch and start passing around Funcs of "do this but check here first":

IObservable<T> GetOrFetchObject<T>(string key, Func<Task<T>> fetchFunc, DateTimeOffset? absoluteExpiration = null);
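Calling that looks roughly like this - a sketch, assuming Akavache's built-in BlobCache.LocalMachine and the fact that its IObservable results can be awaited:

// Sketch: the fetch Func only runs when "showsList" isn't already in the cache.
var shows = await BlobCache.LocalMachine.GetOrFetchObject(
    "showsList",
    () => _client.GetShows(),
    DateTimeOffset.Now.AddHours(4)); // absolute expiration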

Could I use Polly and refactor via subtraction?

Per the Polly docs:

The Polly CachePolicy is an implementation of read-through cache, also known as the cache-aside pattern. Providing results from cache where possible reduces overall call duration and can reduce network traffic.
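In practice that means wrapping the call in ExecuteAsync with a Context whose key doubles as the cache key. A minimal sketch, using the MemoryCacheProvider I set up a little further down:

// Sketch: the CachePolicy checks the cache for the Context key first and only
// invokes the delegate (the real backend call) on a miss.
var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromHours(4));

List<Show> shows = await cachePolicy.ExecuteAsync(
    context => _client.GetShows(),  // only runs on a cache miss
    new Context("showsList"));      // "showsList" is the cache key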

First, I'll remove all my own caching code and just make the call on every page view. Yes, I could write the Linq a few ways. Yes, this could all be one line. Yes, I like Logging.

public async Task<List<Show>> GetShows()
{
_logger.LogInformation($"Loading new shows");
List<Show> shows = await _client.GetShows();
_logger.LogInformation($"Loaded {shows.Count} shows");
return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
}

No caching, I'm doing The Least.

Polly supports both the .NET MemoryCache that is per process/per node, and also .NET Core's IDistributedCache for having one cache that lives somewhere shared like Redis or SQL Server. Since my podcast is just one node, one web server, and it's low-CPU, I'm not super worried about it. If Azure WebSites does end up auto-scaling it, sure, this cache strategy will happen n times. I'll switch to Distributed if that becomes a problem.
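For reference, the ASP.NET Core side of that switch is just a different registration - a sketch, assuming the Microsoft.Extensions.Caching.Redis package, and Polly would still need a matching distributed cache provider on top of it:

// Sketch: swap the per-process IMemoryCache for a shared, Redis-backed IDistributedCache.
services.AddDistributedRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // hypothetical connection string
    options.InstanceName = "hanselminutes:";  // hypothetical key prefix
});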

I'll add a reference to Polly.Caching.MemoryCache in my project.

I ensure I have the .NET Memory Cache in my list of services in ConfigureServices in Startup.cs:

services.AddMemoryCache();

STUCK...for now!

AND...here is where I'm stuck. I got this far into the process and now I'm either confused OR I'm in a Chicken and the Egg Situation.

Forgive me, friends, and Polly authors, as this Blog Post will temporarily turn into a GitHub Issue. Once I work through it, I'll update this so others can benefit. And I still love you; no disrespect intended.

The Polly.Caching.MemoryCache stuff is several months old, and existed (and worked) well before the new HttpClientFactory stuff I blogged about earlier.

I would LIKE to add my Polly Caching Policy chained after my Transient Error Policy:

services.AddHttpClient<SimpleCastClient>().
AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
handledEventsAllowedBeforeBreaking: 2,
durationOfBreak: TimeSpan.FromMinutes(1)
)).
AddPolicyHandlerFromRegistry("myCachingPolicy"); //WHAT I WANT?

However, that policy hasn't been added to the Policy Registry yet. It doesn't exist! This makes me feel like some of the work that is happening in ConfigureServices() is a little premature. ConfigureServices() is READY, AIM and Configure() is FIRE/DO-IT, in my mind.
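For what it's worth, here's a sketch of what forcing it all into ConfigureServices might look like. AddPolicyRegistry() (from Microsoft.Extensions.Http.Polly) hands the registry back immediately, but the caching piece still means newing up the MemoryCache by hand - exactly the kind of manual construction I'd rather avoid, and it does nothing about the package mismatch below:

public void ConfigureServices(IServiceCollection services)
{
    // Sketch only: create the registry during ConfigureServices so the policy exists
    // before AddPolicyHandlerFromRegistry("myCachingPolicy") goes looking for it.
    var registry = services.AddPolicyRegistry();

    // The catch: MemoryCacheProvider wants an IMemoryCache and the container isn't built yet,
    // so I'd have to construct one myself and register that same instance as the singleton.
    var memoryCache = new MemoryCache(new MemoryCacheOptions());
    services.AddSingleton<IMemoryCache>(memoryCache);

    registry.Add("myCachingPolicy",
        Policy.CacheAsync(new MemoryCacheProvider(memoryCache), TimeSpan.FromMinutes(5)));

    // ...then the AddHttpClient<SimpleCastClient>() chain from above, ending with
    // .AddPolicyHandlerFromRegistry("myCachingPolicy");
}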

If I set up a Memory Cache in Configure, I need to use the Dependency System to get the stuff I want, like the .NET Core IMemoryCache that I put in services just now.

public void Configure(IApplicationBuilder app, IPolicyRegistry<string> policyRegistry, IMemoryCache memoryCache)
{
MemoryCacheProvider memoryCacheProvider = new MemoryCacheProvider(memoryCache);
var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromMinutes(5));
policyRegistry.Add("cachePolicy", cachePolicy);
...

But at this point, it's too late! I need this policy up earlier...or I need to figure out a way to tell the HttpClientFactory to use this policy...but I've been using extension methods in ConfigureServices to do that so far. Perhaps some extension methods are needed, like AddCachingPolicy(). However, from what I can see:

This code can't work with ASP.NET Core 2.1's HttpClientFactory pattern...yet. https://github.com/App-vNext/Polly.Caching.MemoryCache

I could manually new things up, but I'm already deep into Dependency Injection...I don't want to start newing things and get into scoping issues.

There appear to be changes between v5.4.0 and 5.8.0. Still looking at this.

Bringing in the Microsoft.Extensions.Http.Polly package brings in Polly-Signed 5.8.0...

But bringing in Polly.Caching.MemoryCache brings in an unsigned 5.4.0.

All I need is https://github.com/App-vNext/Polly.Caching.MemoryCache/blob/master/src/Polly.Caching.MemoryCache.Shared/MemoryCacheProvider.cs so I could bring that in and compile it myself (to unblock myself), but then I have to #define PORTABLE. I shouldn't need to do any of this.

So Polly.Caching.MemoryCache needs to be updated for 5.8 (signed?) or not...not sure. This signed/unsigned thing is always bad.

I'm likely bumping into a point in time thing. I will head to bed and return to this post in a day and see if I (or you, Dear Reader) have solved the problem in my sleep.

"code making and code breaking" by earthlightbooks - Licensed under CC-BY 2.0 - Original source via Flickr

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly

Apr 24, 2018

Description:

Last week while upgrading my podcast site to ASP.NET Core 2.1 and .NET Core 2.1, I moved my Http Client instances over to be created by the new HttpClientFactory. Now I have a single central place where my HttpClient objects are created and managed, and I can set policies as I like on each named client.

It really can't be overstated how useful a resilience framework for .NET Core like Polly is.

Take some code like this that calls a backend REST API:

public class SimpleCastClient
{
private HttpClient _client;
private ILogger<SimpleCastClient> _logger;
private readonly string _apiKey;

public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
{
_client = client;
_client.BaseAddress = new Uri($"https://api.simplecast.com");
_logger = logger;
_apiKey = config["SimpleCastAPIKey"];
}

public async Task<List<Show>> GetShows()
{
var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
var res = await _client.GetAsync(episodesUrl);
return await res.Content.ReadAsAsync<List<Show>>();
}
}

Now consider what it takes to add things like

Retry n times - maybe it's a network blip

Circuit-breaker - Try a few times but stop so you don't overload the system.

Timeout - Try, but give up after n seconds/minutes

Cache - You asked before! I'm going to do a separate blog post on this because I wrote a WHOLE caching system and I may be able to "refactor via subtraction."

If I want features like Retry and Timeout, I could end up littering my code. OR, I could put it in a base class and build a series of HttpClient utilities. However, I don't think I should have to do those things because while they are behaviors, they are really cross-cutting policies. I'd like a central way to manage HttpClient policy!
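Just to make "littering" concrete, here's a sketch of the kind of inline retry loop that otherwise gets copy-pasted around every call site:

// Sketch: hand-rolled retry ceremony around a single HttpClient call. Now imagine this
// repeated (each copy slightly different) everywhere the app talks to the backend.
List<Show> shows = null;
for (int attempt = 1; attempt <= 3; attempt++)
{
    try
    {
        shows = await _client.GetShows();
        break; // success - stop retrying
    }
    catch (HttpRequestException) when (attempt < 3)
    {
        await Task.Delay(TimeSpan.FromSeconds(attempt)); // crude backoff before the next try
    }
}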

Enter Polly. Polly is an OSS library with a lovely Microsoft.Extensions.Http.Polly package that you can use to combine the goodness of Polly with ASP.NET Core 2.1.

As Dylan from the Polly Project says:

HttpClientFactory in ASPNET Core 2.1 provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call.

I just went into my Startup.cs and changed this

services.AddHttpClient<SimpleCastClient>();

to this (after adding "using Polly;" as a namespace)

services.AddHttpClient<SimpleCastClient>().
AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.RetryAsync(2));

and now I've got Retries. Change it to this:

services.AddHttpClient<SimpleCastClient>().
AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
handledEventsAllowedBeforeBreaking: 2,
durationOfBreak: TimeSpan.FromMinutes(1)
));

And now I've got CircuitBreaker where it backs off for a minute if it's broken (hit a handled fault) twice!

I like AddTransientHttpErrorPolicy because it automatically handles Http5xx's and Http408s as well as the occasional System.Net.Http.HttpRequestException. I can have as many named or typed HttpClients as I like and they can have all kinds of specific policies with VERY sophisticated behaviors. If those behaviors aren't actual Business Logic (tm) then why not get them out of your code?

Go read up on Polly at https://github.com/App-vNext/Polly and check out the extensive samples at https://github.com/App-vNext/Polly-Samples/tree/master/PollyTestClient/Samples.

Even though it works great with ASP.NET Core 2.1 (best, IMHO) you can use Polly with .NET 4, .NET 4.5, or anything that's compliant with .NET Standard 1.1.

Gotchas

A few things to remember. If you are POSTing to an endpoint and applying retries, you want that operation to be idempotent.

"From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result."

But everyone's API is different. What would happen if you applied a Polly Retry Policy to an HttpClient and it POSTed twice? Is that backend behavior compatible with your policies? Know what behavior you expect and plan for it. You may want to have a GET policy and a POST one and use different HttpClients. Just be conscious.
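One way to be conscious about it - a sketch, using the per-request policy selector overload of AddPolicyHandler - is to only retry the requests you know are safe:

// Sketch: pick a policy per request - retry GETs, leave non-idempotent POSTs alone.
IAsyncPolicy<HttpResponseMessage> retryPolicy = Policy<HttpResponseMessage>
    .Handle<HttpRequestException>()
    .OrResult(r => (int)r.StatusCode >= 500)
    .RetryAsync(2);
IAsyncPolicy<HttpResponseMessage> noOpPolicy = Policy.NoOpAsync<HttpResponseMessage>();

services.AddHttpClient<SimpleCastClient>()
    .AddPolicyHandler(request =>
        request.Method == HttpMethod.Get ? retryPolicy : noOpPolicy);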

Next, think about Timeouts. HttpClients have a Timeout which is "all tries overall timeout" while a TimeoutPolicy inside a Retry is "timeout per try." Again, be aware.
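In Polly terms, "timeout per try" means putting the TimeoutPolicy inside the retry. With HttpClientFactory the handlers you add first sit on the outside, so the ordering looks something like this - a sketch, with arbitrary numbers, and TimeoutRejectedException coming from the Polly.Timeout namespace:

services.AddHttpClient<SimpleCastClient>()
    // Outer: retry twice on transient errors, and also when an individual try times out below.
    .AddTransientHttpErrorPolicy(b => b.Or<TimeoutRejectedException>().RetryAsync(2))
    // Inner: each try gets its own 10 second budget; HttpClient.Timeout still caps the whole thing.
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10)));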

Thanks to Dylan Reisenberger for his help on this post, along with Joel Hulen! Also read more about HttpClientFactory on Steve Gordon's blog and learn more about HttpClientFactory and Polly on the Polly project site.

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

HttpClientFactory for typed HttpClient instances in ASP.NET Core 2.1

Apr 20, 2018

Description:

THE HANSELMINUTES PODCAST

I'm continuing to upgrade my podcast site https://www.hanselminutes.com to .NET Core 2.1 running ASP.NET Core 2.1. I'm using Razor Pages, having converted my old Web Matrix site (like 8 years old), and it's gone very smoothly. I've got a ton of blog posts queued up as I'm learning a ton. I've added Unit Testing for the Razor Pages as well as more complete Integration Testing for checking things "from the outside" like URL redirects.

My podcast has recently switched away from a custom database over to using SimpleCast and their REST API for the back end. There's a number of ways to abstract that API away as well as the HttpClient that will ultimately make the call to the SimpleCast backend. I am a fan of the Refit library for typed REST Clients and there are ways to integrate these two things but for now I'm going to use the new HttpClientFactory introduced in ASP.NET Core 2.1 by itself.
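Just for flavor, a typed REST client with Refit is basically an interface plus attributes. A sketch with hypothetical route and parameter names, not what I'm actually shipping:

// Sketch of a Refit-style typed client: you write the interface and
// RestService.For<T>() generates the implementation.
public interface ISimpleCastApi
{
    [Get("/v1/podcasts/{showId}/episodes.json")]
    Task<List<Show>> GetEpisodes(string showId, [AliasAs("api_key")] string apiKey);
}

// Usage:
var api = RestService.For<ISimpleCastApi>("https://api.simplecast.com");
List<Show> episodes = await api.GetEpisodes("shownum", apiKey);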

Next I'll look at implementing a Polly Handler for resilience policies to be used like Retry, WaitAndRetry, and CircuitBreaker, etc. (I blogged about Polly in 2015 - you should check it out) as it's just way too useful to not use.

HttpClient Factory lets you preconfigure named HttpClients with base addresses and default headers so you can just ask for them later by name.

public void ConfigureServices(IServiceCollection services)
{
services.AddHttpClient("SomeCustomAPI", client =>
{
client.BaseAddress = new Uri("https://someapiurl/");
client.DefaultRequestHeaders.Add("Accept", "application/json");
client.DefaultRequestHeaders.Add("User-Agent", "MyCustomUserAgent");
});
services.AddMvc();
}

Then later you ask for it and you've got less to worry about.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace MyApp.Controllers
{
public class HomeController : Controller
{
private readonly IHttpClientFactory _httpClientFactory;

public HomeController(IHttpClientFactory httpClientFactory)
{
_httpClientFactory = httpClientFactory;
}

public async Task<IActionResult> Index()
{
var client = _httpClientFactory.CreateClient("SomeCustomAPI");
return Ok(await client.GetStringAsync("/api"));
}
}
}

I prefer a TypedClient and I just add it by type in Startup.cs...just like above except:

services.AddHttpClient<SimpleCastClient>();

Note that I could put the BaseAddress in multiple places depending on if I'm calling my own API, a 3rd party, or some dev/test/staging version. I could also pull it from config:

services.AddHttpClient<SimpleCastClient>(client => client.BaseAddress = new Uri(Configuration["SimpleCastServiceUri"]));

Again, I'll look at ways to make this even simpler AND more robust (it has no retries, etc) with Polly soon.

public class SimpleCastClient
{
private HttpClient _client;
private ILogger<SimpleCastClient> _logger;
private readonly string _apiKey;

public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
{
_client = client;
_client.BaseAddress = new Uri($"https://api.simplecast.com"); //Could also be set in Startup.cs
_logger = logger;
_apiKey = config["SimpleCastAPIKey"];
}

public async Task<List<Show>> GetShows()
{
try
{
var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
_logger.LogWarning($"HttpClient: Loading {episodesUrl}");
var res = await _client.GetAsync(episodesUrl);
res.EnsureSuccessStatusCode();
return await res.Content.ReadAsAsync<List<Show>>();
}
catch (HttpRequestException ex)
{
_logger.LogError($"An error occurred connecting to SimpleCast API {ex.ToString()}");
throw;
}
}
}

Once I have the client I can use it from another layer, or just inject it with [FromServices] whenever I have a method that needs one:

public class IndexModel : PageModel
{
public async Task OnGetAsync([FromServices]SimpleCastClient client)
{
var shows = await client.GetShows();
}
}

Or in the constructor:

public class IndexModel : PageModel
{
private SimpleCastClient _client;

public IndexModel(SimpleCastClient client)
{
_client = client;
}
public async Task OnGetAsync()
{
var shows = await _client.GetShows();
}
}

Another nice side effect is that HttpClients that are created from the HttpClientFactory give me free logging:

info: System.Net.Http.ShowsClient.LogicalHandler[100]
Start processing HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
System.Net.Http.ShowsClient.LogicalHandler:Information: Start processing HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
info: System.Net.Http.ShowsClient.ClientHandler[100]
Sending HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
System.Net.Http.ShowsClient.ClientHandler:Information: Sending HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
info: System.Net.Http.ShowsClient.ClientHandler[101]
Received HTTP response after 882.8487ms - OK
System.Net.Http.ShowsClient.ClientHandler:Information: Received HTTP response after 882.8487ms - OK
info: System.Net.Http.ShowsClient.LogicalHandler[101]
End processing HTTP request after 895.3685ms - OK
System.Net.Http.ShowsClient.LogicalHandler:Information: End processing HTTP request after 895.3685ms - OK

It was super easy to move my existing code over to this model, and I'll keep simplifying AND adding other features as I learn more.

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

How to setup Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows

Apr 19, 2018

Description:

This commit was signed with a verified signature.

This week in obscure blog titles, I bring you the nightmare that is setting up Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows. This is one of those "it's good for you" things like diet and exercise and setting up 2 Factor Authentication. I just want to be able to sign my code commits to GitHub so I might avoid people impersonating my Git Commits (happens more than you'd think and has happened recently.) However, I also was hoping to make it more secure by using a YubiKey 4 or Yubikey NEO security key. They're happy to tell you that it supports a BUNCH of stuff that you have never heard of like Yubico OTP, OATH-TOTP, OATH-HOTP, FIDO U2F, OpenPGP, Challenge-Response. I am most concerned with it acting like a Smart Card that holds a PGP (Pretty Good Privacy) key since the YubiKey can look like a "PIV (Personal Identity Verification) Smart Card."

NOTE: I am not a security expert. Let me know if something here is wrong (be nice) and I'll update it. Note also that there are a LOT of guides out there. Some are complete and encyclopedic, some include recommendations and details that are "too much," but this one was my experience. This isn't The Bible On The Topic but rather  what happened with me and what I ran into and how I got past it. Until this is Super Easy (TM) on Windows, there's gonna be guides like this.

As with all things security, there is a balance between Capital-S Secure with offline air-gapped what-nots, and Ease Of Use with tools like Keybase. It depends on your tolerance, patience, technical ability, and if you trust any online services. I like Keybase and trust them so I'm starting there with a Private Key. You can feel free to get/generate your key from wherever makes you happy and secure.

Welcome to Keybase.io

I use Windows and I like it, so if you want to use a Mac or Linux this blog post likely isn't for you. I love and support you and your choice though. ;)

Make sure you have a private PGP key that has your Git Commit Email Address associated with it

I downloaded and installed (and optionally donated) a copy of Gpg4Win here.

Take your private key - either the one you got from Keybase or one you generated locally - and make sure that your UID (your email address that you use on GitHub) is a part of it. Here you can see mine is not, yet. That could be the main email or might be an alias or "uid" that you'll add.

Certs in Kleopatra

If not - as in my case since I'm using a key from keybase - you'll need to add a new uid to your private key. You will know you got it right when you run this command and see your email address inside it.

> gpg --list-secret-keys --keyid-format LONG

------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]

uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>

You can adduid in the gpg command line or you can add it in the Kleopatra GUI.


List them again and you'll see the added uid.

> gpg --list-secret-keys --keyid-format LONG

------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]
uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>
uid [ unknown] Scott Hanselman <scott@hanselman.com>

When you make changes like this, you can export your public key and update it in Keybase.io (again, if you're using Keybase).


Plug in your YubiKey

When you plug your YubiKey in (assuming it's newer than 2015) it should get auto-detected and show up like this "Yubikey NEO OTP+U2F+CCID." You want it to show up as this kind of "combo" or composite device. If it's older or not in this combo mode, you may need to download the YubiKey NEO Manager and switch modes.

Setting up a YubiKey on Windows

Test that your YubiKey can be seen as a Smart Card

Go to the command line and run this to confirm that your Yubikey can be seen as a smart card by the GPG command line.

> gpg --card-status
Reader ...........: Yubico Yubikey NEO OTP U2F CCID 0
Version ..........: 2.0
....

IMPORTANT: Sometimes Windows machines and Corporate Laptops have multiple smart card readers, especially if they have Windows Hello installed like my SurfaceBook2! If you hit this, you'll want to create a text file at %appdata%\gnupg\scdaemon.conf and include a reader-port that points to your YubiKey. Mine is a NEO, yours might be a 4, etc, so be aware. You may need to reboot or at least restart/kill the GPG services/background apps for it to notice you made a change.
If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:

reader-port "Yubico Yubikey NEO OTP+U2F+CCID 0"

Yubico Yubikey NEO OTP+U2F+CCID 0

Yubikey NEO can hold keys up to 2048 bits and the Yubikey 4 can hold up to 4096 bits - that's MOAR bits! However, you might find yourself with a 4096 bit key that is too big for the Yubikey NEO. Lots of folks believe this is a limitation of the NEO that sucks and is unacceptable. Since I'm using Keybase and starting with a 4096 bit key, one solution is to make separate 2048 bit subkeys for Authentication and Signing, etc.

From the command line, edit your keys then "addkey"

> gpg --edit-key <scott@hanselman.com>

You'll make a 2048 bit Signing key and you'll want to decide if it ever expires. If it never does, also make a revocation certificate so you can revoke it at some future point.

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all

Save your changes, and then export the keys. You can do that with Kleopatra or with the command line:

--export-secret-keys --armor KEYID

Here's a GUI view. I have my main 4096 bit key and some 2048 bit subkeys for Signing or Encryption, etc. Make as many as you like


LEVEL SET - It will be the public version of the 2048 bit Signing Key that we'll tell GitHub about and we'll put the private part on the YubiKey, acting as a Smart Card.

Move the signing subkey over to the YubiKey

Now I'm going to take my keychain here, select the signing one (note the ASTERISK after I type "key 1"), then "keytocard" to move/store it on the YubiKey's SmartCard Signature slot. I'm using my email as a way to get to my key, but if your email is used in multiple keys you'll want to use the unique Key Id/Signature. BACK UP YOUR KEYS.

> gpg --edit-key scott@hanselman.com

gpg (GnuPG) 2.2.6; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>
gpg> toggle
gpg> key 1

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb* rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>

gpg> keytocard
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1
gpg> save

If you're storing things on your Smart Card, it should have a pin to protect it. Also, make sure you have a backup of your primary key (if you like) because keytocard is a destructive action.

Have you set up PIN numbers for your Smart Card?

There's a PIN and an Admin PIN. The Admin PIN is the longer one. The default admin PIN is usually ‘12345678’ and the default PIN is usually ‘123456’. You'll want to set these up with either the Kleopatra GUI "Tools | Manage Smart Cards" or the gpg command line:

>gpg --card-edit
gpg/card> admin
Admin commands are allowed
gpg/card> passwd
*FOLLOW THE PROMPTS TO SET PINS, BOTH ADMIN AND STANDARD*

Tell Git about your Signing Key Globally

Be sure to tell Git on your machine some important configuration info like your signing key, but also WHERE the gpg.exe is. This is important because git ships its own older local copy of gpg.exe and you installed a newer one!

git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
git config --global commit.gpgsign true
git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

If you don't want to set ALL commits to signed, you can skip the commit.gpgsign=true and just include -S as you commit your code:

git commit -S -m "your commit message"

Test that you can sign things

If you are running Kleopatra (the noob Windows GUI), when you run gpg --card-status you'll notice the cert will turn boldface and get marked as certified.

The goal here is for you to make sure GPG for Windows knows that there's a private key on the smart card, and associates a signing Key ID with that private key so when Git wants to sign a commit, you'll get a Smart Card PIN Prompt.

Advanced: you can make SubKeys for individual things so that they might also be later revoked without torching your main private key. Using the Kleopatra tool from GPG for Windows you can explore the keys and get their IDs. You'll use those Subkey IDs in your git config to refer to your signingkey.

At this point things should look kinda like this in the Kleopatra GUI:

Multiple PGP Sub keys

Make sure to prove you can sign something by making a text file and signing it. If you get a Smart Card prompt (assuming a YubiKey) and a larger .gpg file appears, you're cool.

> gpg --sign .\quicktest.txt
> dir quic*

Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 4/18/2018 3:29 PM 9 quicktest.txt
-a---- 4/18/2018 3:38 PM 360 quicktest.txt.gpg

Now, go to GitHub at https://github.com/settings/keys and scroll to the bottom. Remember that's GPG Keys, not SSH Keys. Make a new one and paste in your public signing key or subkey.

Note the KeyID (or the SubKey ID) and remember that one of them (either the signing one or the primary one) should be the ID you used when you set up user.signingkey in git above.

GPG Keys in GitHub

The most important thing is that:

the email address associated with the GPG Key

is the same as the email address GitHub has verified for you

is the same as the email in the Git Commit

git config --global user.email "email@example.com"

If not, double check your email addresses and make sure they are the same everywhere.

Try a signed commit

If pressing enter pops a PIN Dialog then you're getting somewhere!

Please unlock the card

Commit and push and go over to GitHub and see if your commit is Verified or Unverified. Unverified means that the commit was signed but either had an email GitHub had never seen OR that you forgot to tell GitHub about your signing public key.

Signed Verified Git Commits

Yay!

Setting up a second (or third) machine

Once you've told Git about your signing key and you've got your signing key stored in your YubiKey, you'll likely want to set up on another machine.

Install GPG for Windows

Run gpg --card-status

Import your public key. If I'm setting up signing on another machine, I can import my PUBLIC certificates like this or graphically in Kleopatra:

> gpg --import "keybase public key.asc"
gpg: key *KEYID*: "keybase.io/shanselman <shanselman@keybase.io>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1

You may also want to run gpg --expert --edit-key *KEYID* and type "trust" to certify your key as someone (yourself) that you trust.

Install Git (I assume you did this) and configure GPG:

git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
git config --global commit.gpgsign true
git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

Sign something with "gpg --sign" to test

Do a test commit.

Finally, feel superior for 8 minutes, then realize you're really just lucky because you just followed the blog post of someone who ALSO has no clue, then go help a co-worker because this is TOO HARD.

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Retrogaming on original consoles in HDMI on a budget

Apr 13, 2018

Description:

Just a few of my consoles. There's a LOT off screen.

My sons (10 and 12) and I have been enjoying Retrogaming as a hobby of late. Sure there's a lot of talk of 4k 60fps this and that, but there are amazing stories in classic video games. From The Legend of Zelda (all of them) to Ico and Shadow of the Colossus, we are enjoying playing games across every platform. Over the years we've assembled quite the collection of consoles, most purchased at thrift stores.

Initially I started out as a purist, wanting to play each game on the original console unmodified. I'm not a fan of emulators for a number of reasons. I don't particularly like the idea of illegal ROMs, and I'd like to support the original game creators. Additionally, if I can support a small business by purchasing original game cartridges or CDs, I prefer to do that as well. However, the kids and I have come up with somewhat of a balance in our console selection.

For example, we enjoy the Hyperkin Retron 5 in that it lets us play NES, Famicom, SNES, Super Famicom, Genesis, Mega Drive, Game Boy, Game Boy Color, & Game Boy Advance over 5 cartridge ports. With one additional adapter, it adds Game Gear, Master System, and Master System Cards. It uses emulators at its heart, but it requires the use of the original game cartridges. However, the Hyperkin supports all the original controllers - many of which we've found at our local thrift store - which strikes a nice balance between the old and the new. Best of all, it uses HDMI as its output plug which makes it super easy to hook up to our TV.

The prevalence of HDMI as THE standard for getting stuff onto our Living Room TV has caused me to dig into finding HDMI solutions for as many of my systems as possible. Certainly you CAN use a Composite Video Adapter to HDMI to go from the classic Yellow/White/Red connectors to HDMI but prepare for disappointment. By the time it gets to your 4k flat panel it's gonna be muddy and gross. These aren't upscalers. They can't clean an analog signal. More on that in a moment because there are LAYERS to these solutions.

Some are simple, and I recommend these (cheap products, but they work great) adapters:

Wii to HDMI Adapter - The Wii is a very under-respected console and has a TON of great games. In the US you can find a Wii at a thrift store for $20 and there's tens of millions of them out there. This simple little adapter will get you very clean 480i or 480p HDMI with audio. Combine that with the Wii's easily soft-modded operating system and you've got the potential for a multi-system emulator as well.

PS2 to HDMI Adapter - This little (cheap) adapter will get you HDMI output as well, although it's converted off the component Y Cb/Pb Cr/Pr signal coming out. It also needs USB Power so you may end up leaching that off the PS2 itself. One note - even though every PS2 can also play PS1 games, those games output 240p and this adapter won't pick it up, so be prepared to downgrade depending on the game. But, if you use a Progressive Scan 16:9 Widescreen game like God of War you'll be very pleased with the result.

Nintendo N64 - THIS is the most difficult console so far to get HDMI output from. There ARE solutions but they are few and far between and often out of stock. There's an RGB mod that will get you clean Red/Green/Blue outputs but not HDMI. You'll need to get the mod and then either do the soldering yourself or find a shop to do it for you. The holy grail is the UltraHDMI Mod but I have yet to find one and I'm not sure I want to pay $150 for it if I do. The cheapest and easiest thing you can and should do with an N64 is get a Composite & S-Video converter box. This box will also do basic up-scaling as well, but remember, this isn't going to create pixels that aren't already there.

Dreamcast - There is an adapter from Akura that will get you all the way to HDMI but it's $85 and it's just for Dreamcast. I chose instead to use a Dreamcast to VGA cable, as the Dreamcast can do VGA natively, then a powered VGA to HDMI box. It doesn't upscale, but rather passes the original video resolution to your panel for upscaling. In my experience this is a solid budget compromise.

If you're ever in or around Portland/Beaverton, Oregon, I highly recommend you stop by Retro Game Trader. Their selection and quality is truly unmatched. One of THE great retro game stores on the west coast of the US.

The games and systems at Retro Game Trader are amazing

Retro Game Trader has shelves upon shelves of classic games

For legal retrogames on a budget, I also enjoy the new "mini consoles" you've likely heard a lot about, all of which support HDMI output natively!

Super NES Classic (USA or Europe have different styles) - 21 classic games, works with HDMI, includes controllers

NES Classic - Harder to find but they are out there. 30 classic games, plus controllers. Tiny!

Atari Flashback 8 - 120 games, 2 controllers AND 2 paddles!

C64 Mini - Includes Joystick and 64 games AND supports a USB Keyboard so you can program in C64 Basic

In the vein of retrogaming, but not directly related, I wanted to give a shout-out to EVERYTHING that the 8BitDo company does. I have three of their controllers and they are amazing. They get constant firmware updates, and particularly the 8Bitdo SF30 Pro Controller is amazing as it works on Windows, Mac, Android, and Nintendo Switch. It pairs perfectly with the Switch, I use it on the road with my laptop as an "Xbox" style controller and it always Just Works. Amazing product.

If you want the inverse - the ability to use your favorite controllers with your Windows, Mac, or Raspberry Pi - check out their Wireless Adapter. You'll be able to pair all your controllers - Xbox One S/X Bluetooth controller, PS4, PS3, Wii Mote, Wii U Pro - and use them on your PC, or wirelessly on Nintendo Switch with DS4 Motion and Rumble features! NOTE: I am NOT affiliated with 8BitDo at all, I just love their products.

We are having a ton of fun doing this. You'll always be on the lookout for old and classic games at swap meets, garage sales, and friends' houses. There's RetroGaming conventions and Arcades (like Ground Kontrol in Portland) and an ever-growing group of new friends and enthusiasts.

This post uses Amazon Referral Links and I'll use the few dollars I get from YOU using them to buy retro games for the kids! Also, go subscribe to the Hanselminutes Podcast today! We're on iTunes, Spotify, Google Play, and even Twitter! Check out the episode where Matt Phillips from Tanglewood Games uses a 1995 PC to create a NEW Sega Megadrive/Genesis game in 2018!

Sponsor: Announcing Raygun APM! Now you can monitor your entire application stack, with your whole team, all in one place. Learn more!


© 2018 Scott Hanselman. All rights reserved.
     

Updating jQuery-based Lazy Image Loading to IntersectionObserver

Apr 11, 2018

Description:

The Hanselminutes Tech Podcast

Five years ago I implemented "lazy loading" of the 600+ images on my podcast's archives page (I don't like paging, as a rule) over here https://www.hanselminutes.com/episodes. I did it with jQuery and a jQuery Plugin. It was kind of messy and gross from a purist's perspective, but it totally worked and has easily saved me (and you) hundreds of dollars in bandwidth over the years. The page is like 9 or 10 megs if you load 600 images, not to mention you're loading 600 freaking images.

Fast-forward to 2018, and there's the "Intersection Observer API" that's supported everywhere but Safari and IE, well, because, Safari and IE, sigh. We will return to that issue in a moment.

Following Dean Hume's blog post on the topic, I start with my images like this. I don't populate src="", but instead hold the Image URL in the HTML5 data- bucket of data-src. For src, I can use the nothing grey.gif or just style and color the image grey.

<a href="/626/christine-spangs-open-source-journey-from-teen-oss-contributor-to-cto-of-nylas" class="showCard"> <img data-src="https://images.hanselminutes.com/images/626.jpg" class="lazy" src="/images/grey.gif" width="212" height="212" alt="Christine Spang&#x27;s Open Source Journey from Teen OSS Contributor to CTO of Nylas" /> <span class="shownumber">626</span> <div class="overlay title">Christine Spang&#x27;s Open Source Journey from Teen OSS Contributor to CTO of Nylas</div> </a> <a href="/625/a-new-sega-megadrivegenesis-game-in-2018-with-1995-tools-with-tanglewoods-matt-phillips" class="showCard"> <img data-src="https://images.hanselminutes.com/images/625.jpg" class="lazy" src="/images/grey.gif" width="212" height="212" alt="A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood&#x27;s Matt Phillips" /> <span class="shownumber">625</span> <div class="overlay title">A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood&#x27;s Matt Phillips</div> </a>

Then, if the images get within 50px intersecting the viewPort (I'm scrolling down) then I load them:

// Get images of class lazy
const images = document.querySelectorAll('.lazy');
const config = {
// If image gets within 50px go get it
rootMargin: '50px 0px',
threshold: 0.01
};
let observer = new IntersectionObserver(onIntersection, config);
images.forEach(image => {
observer.observe(image);
});

Now that we are watching it, we need to do something when it's observed.

function onIntersection(entries) {
// Loop through the entries
entries.forEach(entry => {
// Are we in viewport?
if (entry.intersectionRatio > 0) {
// Stop watching and load the image
observer.unobserve(entry.target);
preloadImage(entry.target);
}
});
}

If the browser (IE, Safari, Mobile Safari) doesn't support IntersectionObserver, we can do a few things. I *could* fall back to my old jQuery technique, although it would involve loading a bunch of extra scripts for those browsers, or I could just load all the images in a loop, regardless, like:

if (!('IntersectionObserver' in window)) {
loadImagesImmediately(images);
} else {...}

Dean's examples are all "Vanilla JS" and require no jQuery, no plugins, no polyfills WITH browser support. There are also some IntersectionObserver helper libraries out there like Cory Dowdy's IOLazy. Cory's is a nice simple wrapper and is super easy to implement. Given I want to support iOS Safari as well, I am using a polyfill to get the support I want from browsers that don't have it natively.

<!-- intersection observer polyfill -->
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=IntersectionObserver"></script>

Polyfill.io is a lovely site that gives you just the fills you need (or those you need AND request) tailored to your browser. Try GETting the URL above in Chrome. You'll see it's basically empty as you don't need it. Then hit it in IE, and you'll get the polyfill. The official IntersectionObserver polyfill is at the w3c.

At this point I've removed jQuery entirely from my site and I'm just using an optional polyfill plus browser support that didn't exist when I started my podcast site. Fewer moving parts means a cleaner, leaner, simpler site!

Go subscribe to the Hanselminutes Podcast today! We're on iTunes, Spotify, Google Play, and even Twitter!

Sponsor: Announcing Raygun APM! Now you can monitor your entire application stack, with your whole team, all in one place. Learn more!


© 2018 Scott Hanselman. All rights reserved.
     

Optimizing an ASP.NET Core site with Chrome's Lighthouse Auditor

Apr 6, 2018

Description:

I'm continuing to update my podcast site. I've upgraded it from ASP.NET "Web Pages" (10 year old code written in WebMatrix) to ASP.NET Core 2.1 developed with VS Code. Here's some recent posts:

Upgrading my podcast site to ASP.NET Core 2.1 in Azure plus some Best Practices

Easier functional and integration testing of ASP.NET Core applications

Automatic Unit Testing in .NET Core plus Code Coverage in Visual Studio Code

Setting up Application Insights took 10 minutes. It created two days of work for me.

Major build speed improvements - Try .NET Core 2.1 Preview 1 today

I was talking with Ire Aderinokun today for an upcoming podcast episode and she mentioned I should use Lighthouse (it's built into Chrome, can be run as an extension, or run from the command line) to optimize my podcast site. I, frankly, had not looked at that part of Chrome in a long time and was shocked at how powerful it was!

Performance 73, PWA 55, Accessibility 68, Best Practices 81, SEO 78

Lighthouse also told me that I was using an old version of jQuery (I know that) that had known security issues (I didn't know that!)

It told me about Accessibility issues as well, pointing out that some of my links were not discernable to a screen reader.

Some of these issues were/are easily fixed in minutes. I think I spent about 20 minutes fixing up some links, compressing a few images, and generally "tidying up" in ways that I knew wouldn't/shouldn't break my site. Those few minutes took my Accessibility and Best Practices score up measurably, but I clearly have some work to do around Performance. I never even considered my Podcast Site as a potential Progressive Web App (PWA) but now that I have a new podcast host and a nice embedded player, that may be a possibility for the future!

Performance 73, PWA 55, Accessibility 85, Best Practices 88, SEO 78

My largest issue is with my (aging) CSS. I'd like to convert the site to use FlexBox or a CSS Grid as well as fix up my Time to First Meaningful Paint.

I went and updated my Archives page a while back with Lazy Image loading, but it was using jQuery and some older (4+ year old) techniques. I'll revisit those with modern techniques AND apply them to the grid of 16 shows on the site's home page as well.

There are opportunities to speed up my application using offscreen images

I have only just begun but I'll report back as I speed things up!

What tools do YOU use to audit your websites?

Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.


© 2018 Scott Hanselman. All rights reserved.