Scott Hanselman's Blog

Scott Hanselman's Blog

Description

Scott Hanselman on Programming, User Experience, The Zen of Computers and Life in General

Link: www.hanselman.com/blog/

Episodes

Review: UniFi from Ubiquiti Networking is the ultimate prosumer home networking solution

Aug 20, 2019

Description:

I LOVE my Amplifi Wi-Fi Mesh Network. I've had it for two years and it's been an absolute star performer. We haven't had a single issue. Rock solid. That's really saying something. From unboxing to installation to running it (working from home for a tech company, so you know I'm pushing this system) it's been totally stable. I recommend Amplifi unreservedly to any consumer or low-key prosumer who has been frustrated with an existing centrally located router that can't deliver reliable wi-fi everywhere in their home.

That said...I recently upgraded my home internet service provider. For the last 10 years I've had fiber optic to the house with 35 Mbps up/down and it's been great. Then I called them a few years back and got 100/100. The whole house was presciently wired by me for Gigabit back in 2007 (!) with a nice wiring closet and everything. Lately 100/100 hasn't really been cutting it when I'm updating a dozen laptops for a work event, copying a VM to the cloud while my spouse is watching 4K Netflix and two boys are updating App Store apps. You get the idea. Modern bandwidth requirements and life have changed since 2007. We've got over 40 devices on the network now and many are doing real work.

I called and changed providers to a cable provider that offered true gigabit. However, I was rarely getting over 300-400 Mbps on my Amplifi. There is a "hardware NAT" option that really helps, but short of running the Amplifi in Bridged Mode and losing a lot of its epic features, it was clear that I was outgrowing this prosumer device.

Given that I'm a professional working at home doing stuff that is more than the average Joe or Jane, what's a professional option?

UniFi from Ubiquiti

Amplifi is the consumer/prosumer line from Ubiquiti Networks and UniFi (UBNT) is the professional line. You'll literally find these installed at businesses or even sports stadiums. This is serious gear.

Let me be honest. I knew UniFi existed. Knew (I thought) all about it and I resisted. My friends and fellow nerds insisted it was easy but I kept seeing massive complex network diagrams and convinced myself it wasn't worth the hassle.

My friends, I was wrong. It's not hard. If you are doing business at home, have a gigabit network pipe, a wired home network, and/or have a dozen or more network devices, you're a serious internet person and you might want to consider serious internet networking gear.

Everything is GREAT

Now, UniFi is more expensive than Amplifi as it's pro gear. While an Amplifi Mesh WiFi system is just about $300-350 USD, UniFi Pro gear will cost more, you'll need a few pieces to get started, and it won't always feel intuitive as you plan your system. It is worth it and I'm thrilled with the result. The flexibility and customizability it's offered has been epic. There are literally no internet issues in our house or property anymore. I've even been able to add wired and wireless non-cloud-based security cameras throughout the property. Additionally, remember how the house is already wired in nearly every room with Cat6 (or Cat5e) cabling? UniFi has reintroduced me to the glorious world of PoE+ (Power over Ethernet) and removed a half dozen AC wall plugs from my system.

Plan your Network

You can test out the web-based software yourself LIVE at https://demo.ui.com and see what managing a large network would be like. Check out their map of the FedEx Forum Stadium and how they get full coverage. You can see a simulated map of my house (not really my house) in the screenshot above. When you set up a controller you can place physical devices (ones you have) and test out virtual devices (ones you are thinking of buying) and see what they would look like on a real map of your home (supplied by you). You can even draw 3D walls and describe their material (brick, glass, steel) and their dB signal loss.


When you are moving to UniFi you'll need:

USG - UniFi Security Gateway - This has 3 gigabit ports, with a WAN port for your external network (plug your router into this) and a LAN port for your internal network (plug your internal switch into this). This is the part that doles out DHCP.

UniFi Cloud Key or Cloud Key Gen2 Plus - It's not intuitive what the USG does vs the Cloud Key, but you need both. I got the Gen2 because it includes a 1TB hard drive that allows me to store my security video locally. It is also itself a PoE client so I don't need to plug it into the wall; I just wired it with a single Ethernet cable to the PoE switch below and left it in the wiring closet. There's a smaller, cheaper Cloud Key if you don't need a hard drive. You don't technically need a Cloud Key, I believe, as the UniFi Controller Software is free and you can run it on any machine you have laying around. Folks have run it on any Linux or Windows machine they have, or even on a Synology or other NAS. I like the idea of having it "just work" so I got the Cloud Key.

UniFi Switch (of some kind and number of ports):

8 port 150 watt UniFi Switch

24 port UniFi Switch - 24 ports may be overkill for most, but it's only 8 lbs and will handle even the largest home network. And it's under $200 USD right now on Amazon.

24 port UniFi Switch with PoE - I got this one because it has 250W of PoE power. If you aren't interested in Power over Ethernet you can save money with the non-PoE version or a 16 port version, but I REALLY REALLY recommend you use PoE because the APs work better with it.
PoE switch showing usage on many ports

Now once you've got the administrative infrastructure above, you just need to add whatever UniFi APs - access points - and/or optional cameras that you want!

NOTE/TIP - A brilliant product from Ubiquiti that I think is flying under the radar is the Unifi G3 Flex PoE camera. It's just $75 and it's tiny but it's absolutely brilliant. Full 1080p video and night vision. I'll talk about the magic of PoE later on but you can just plug this in anywhere in the house - no AC adapter - and you've got a crystal clear security camera or cameras anywhere in the house. They are all powered from the PoE switch!

I had a basic networking closet. I put the USG Gateway into the closet with a patch cable to the cable modem (the DOCSIS 3.1 cable modem that I bought because I got tired of renting it from the service provider), then added the Switch with PoE, and plugged the Cloud Key into it. Admin done.

Here's the lovely part.

Since I have Ethernet cable throughout the house, I can just plug in the UniFi Access Points in various rooms and they get power immediately. I can try different configs and test the signal strength. I found the perfect config after about 4 days of moving things around and testing on the interactive map. The first try was fine but I strove for perfect.

There's lots of UniFi Access Points to choose from. The dual radio Pro version can get pretty expensive if you have a lot so I got the Lite PoE AP. You can also get a 5 pack of the nanoHD UniFi Access Points.

These Access Points are often mounted in the ceiling in pro installations, and in a few spots I really wanted something more subtle AND I could use a few extra Ethernet ports. Since I already had an Ethernet port in the wall, I could just wall mount the UniFi Wall Mounted AP. It's both a wireless AP that radiates outward into the room AND it turns your one port into two, or you can get one that becomes a switch with more ports and extends your PoE abilities. So I can add this to a room, plug a few devices in AND a PoE powered Camera with no wall-warts or AC adapters!

NOTE: I did need to add a new ethernet RJ45 connector to plug into the female connector of the UniFi in-wall AP. Just be sure to plan and take inventory. You may already have full cables with connectors pulled to your rooms. Be aware.

There are a TON of great Wireless AP options from UniFi so make sure you explore them all and understand what you want.

In-Wall AP

Here's the resulting setup and choices I made, as viewed in the UniFi Controller Software:

List of Ubnt devices

I have the Gateway, the Switch with PoE, and five APs. Three are the disc APs and two are in-wall APs. They absolutely cover and manage my entire two story house and yards front and back. It's made it super easy for me to work from home and be able to work effectively from any room. My kids and family haven't had any issues with any tablets or phones.

As of the time of this writing I have 27 wireless devices on the system and 11 wired (at least, those are the ones that are doing stuff at this hour).

My devices as viewed in the UniFi controller

Note how it tells you what each device's WiFi experience is like. I use this Experience information to help me manage the network and see if the APs are appropriately placed. There are a TON of great statistics and charts and graphs. It's info-rich to say the LEAST.

NOTE: To answer a common question - In an installation like this you've got a single SSID even though there's lots of APs and your devices will quietly and automatically roam between them!
Log showing roaming between APs

The iPhone app is very full-featured as well, and when you've got deep packet inspection turned on you can see a ton of statistical information at the price of a smidge of throughput performance.

iPhone StatsiPhone Bandwidth

I have had NO problem hitting 800-950 Mbps over wired connections and I feel like there's no real limit to the perf of this system. I've done game streaming over Steam and Xbox game streaming for hours without a hiccup. Netflix doesn't buffer anymore, even on the back porch.

a lot of bandwidth with no drops

You can auto-optimize, or you can turn off a plethora of features and manage everything manually. I was able to tweak a few APs to run their 2.4GHz Wi-Fi radios on less crowded channels in order to get out of the way of the loud neighbors on channel 11.

I have a ton of control over the network now, unlimited expandability, and it has been a fantastically stable network. All the APs are wire-backed and the wireless bandwidth is rock solid. I've been extremely impressed with the clean roaming from room to room while streaming from Netflix. It's a tweaker's (ahem) dream network.

* I use Amazon referral links and donate the little money to my kids' school. You support charter schools when you use these links.

Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.


© 2019 Scott Hanselman. All rights reserved.
     

SharpScript from ServiceStack lets you run .NET apps directly from a GitHub Gist!

Aug 15, 2019

Description:

I've blogged about ServiceStack before. It's an extraordinary open source project - an ecosystem of its own even - that is designed to be an alternative to the WCF, ASP.NET MVC, and ASP.NET Web API frameworks. I enjoy it so much I even helped write its tagline "Thoughtfully architected, obscenely fast, thoroughly enjoyable web services for all"

ServiceStack is an easy drop-in that simplifies creating Web Services in any ASP.NET Web App, but also in Self Hosting Console Apps, Windows Services and even Windows and OSX Desktop Apps - supporting both .NET Framework and .NET Core. The easiest way to get started is to create a new project from a ServiceStack VS.NET Template.

ServiceStack has released a new and amazing project that is absolutely audacious in its scope and elegant in its integration with the open source .NET Core ecosystem - #Script (pronounced "sharp script.")

Scripts IN your app!

There are a number of .NET projects that simulate REPLs or allow basic scripting - "dotnet script" or ScriptCS, for example - but I'm deeply impressed with #Script. To start with, #Script is somewhat better suited for scripting than Razor and it doesn't require precompilation. #Script is appropriate for live documents or email templates, for example.

Here's a basic example of embedding a ScriptContext in your app:

var context = new ScriptContext().Init();
var output = context.EvaluateScript("Time is now: {{ now | dateFormat('HH:mm:ss') }}");
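
You can also feed data into a script as arguments. Here's a small sketch of my own (it assumes EvaluateScript accepts an optional args dictionary, which is how I read the #Script docs - double check the exact overload before relying on it):

var context = new ScriptContext().Init();
var output = context.EvaluateScript(
    "Hello, {{ name }}! Time is now: {{ now | dateFormat('HH:mm:ss') }}",
    new Dictionary<string, object> { ["name"] = "World" });
Console.WriteLine(output);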

Where ServiceStack's #Script really shines is its use of .NET Core Global Tools. They've nabbed two global tool names - web and app (sassy!) and allow one to create SharpApps. From their site:

Sharp Apps leverages #Script to develop entire content-rich, data-driven websites without needing to write any C#, compile projects or manually refresh pages - resulting in the easiest and fastest way to develop Web Apps in .NET!

The web tool is cross platform and the app global tool is great for Windows as it supports .NET Core Windows Desktop Apps.

Your app IS a script!

You can write interactive SharpScripts or SharpApps that use Chromium as a host.

You can literally run a "desktop" app self contained from a GitHub Gist!

Sharp Apps can also be published to Gists where they can be run on-the-fly without installation. They're always up-to-date, have tiny footprints, are fast to download and launch, and can also run locally, off-line, and cross-platform across Windows, macOS, and Linux.

There's also a "gallery" that maps short names to existing examples. So run "app open" to get a list, then "app open name" to run one. You can just "app open blog" and you're running a quick local blog.

SharpApps

Easy to develop and run

The global tools make SharpApp a complete dev and runtime experience because you can just run "app" in the source folder and as you make code changes the hot-reloader updates the site as you Ctrl-S (save) a file!

If you've got the .NET Core SDK installed (it's super quick) then just grab the global tool like this (app on Windows and web anywhere else):

dotnet tool install --global app

And if you have an existing .NET Core web app, you can launch it and run it in a Chromium Embedded Framework (CEF) browser with "app foo.dll". Check out this example on how to make and run a .NET Core app on the Windows Desktop with #Script.

ServiceStack CEF App

Then you can make a shortcut and add it to the desktop with

app shortcut Acme.dll

Slick!

Code in #Script goes in markdown-style ```code blocks, where Razor would use @{ }, and templating uses mustache-style {{ }} expressions. Go try out some of their Starter Projects!

#Script and SharpApps are an extraordinary addition to the .NET Core ecosystem and I'm just scratching the surface. Do check out their site at https://sharpscript.net.

What do you think?

Sponsor: Develop Xamarin applications without difficulty with the latest JetBrains Rider: Xcode integration, JetBrains Xamarin SDK, and manage the required SDKs for Android development, all right from the IDE. Get it today!


© 2019 Scott Hanselman. All rights reserved.
     

I miss Microsoft Encarta

Aug 13, 2019

Description:

Microsoft Encarta came out in 1993 and was one of the first CD-ROMs I had. It stopped shipping in 2009 on DVD. I recently found a disc and was impressed that it installed perfectly on my latest Windows 10 machine and runs nicely.

Encarta existed in an interesting place between the rise of the internet and computers' ability to deal with (at the time) massive amounts of data. CD-ROMs could bring us 700 MEGABYTES, which was unbelievable when compared to the 1.44MB (or even 120KB) floppy disks we were used to. The idea that Encarta was so large that it was 5 CD-ROMs (!) was staggering, even though that's just a few gigs today. Even a $5 USB stick could hold Encarta - twice!

My kids can't possibly intellectualize the scale at which data exists today. We could barely believe that a whole bookshelf of Encyclopedias was now in our pockets. I spent hours and hours just wandering around random articles in Encarta. The scope of knowledge was overwhelming, but accessible. But it was contained - it was bounded. Today, my kids just assume that the sum of all human knowledge is available with a single search or a "hey Alexa" so the world's mysteries are less mysterious and they become bored by the Paradox of Choice.


In a world of 4k streaming video, global wireless, and high-speed everything, there's really no analog to the feeling we got watching the Moon Landing as a video in Encarta - short of watching it live on TV in 1969! For most of us, this was the first time we'd ever seen full-motion video on-demand on a computer in any sort of fidelity - and these are mostly 320x240 or smaller videos!

First Steps on the Moon

A generation of us grew up hearing MLK's "I have a dream" speech inside Microsoft Encarta!

MLK I have a Dream

Remember the Encarta "So, you wanna play some Basketball" Video?

LeBron James from 2003

Amazed by Google Earth? You never saw the globe in Encarta.

Globe in Encarta

You'll be perhaps surprised to hear that the Encarta Timeline works even today across THREE 4k monitors at nearly 10,000 pixels across! This was a product that was written over 10 years ago and could never have conceived of that many pixels. It works great!

The Encarta Timeline across 3 4k monitors

Most folks at Microsoft don't realize that Encarta exists and is used TODAY all over the developing world on disconnected or occasionally connected computers. (Perhaps Microsoft could make the final version of Encarta available for a free final download so that we might avoid downloading illegal or malware infested versions?)

What are your fond memories of Encarta? If you're not of the Encarta generation, what's your impression of it? Had you heard or thought of it?

Sponsor: Develop Xamarin applications without difficulty with the latest JetBrains Rider: Xcode integration, JetBrains Xamarin SDK, and manage the required SDKs for Android development, all right from the IDE. Get it today!


© 2019 Scott Hanselman. All rights reserved.
     

The PICO-8 Virtual Fantasy Console is an idealized constrained modern day game maker

Aug 8, 2019

Description:

I love everything about PICO-8. It's a fantasy gaming console that wants you - and the kids in your life and everyone you know - to make games!

How cool is that?

You know the game Celeste? It's available on every platform, has won every award, and is generally considered a modern-day classic. Well, the first version was made on PICO-8 in 4 days as a hackathon project and you can play it here online. Here's the link from when they launched it 4 years ago on the forums. They pushed the limits, as they call out: "We used pretty much all our resources for this. 8186/8192 code, the entire spritemap, the entire map, and 63/64 sounds." How far could one go? Wolf3D even?

"A fantasy console is like a regular console, but without the inconvenience of actual hardware. PICO-8 has everything else that makes a console a console: machine specifications and display format, development tools, design culture, distribution platform, community and playership. It is similar to a retro game emulator, but for a machine that never existed. PICO-8's specifications and ecosystem are instead designed from scratch to produce something that has it's own identity and feels real. Instead of physical cartridges, programs made for PICO-8 are distributed on .png images that look like cartridges, complete with labels and a fixed 32k data capacity."

What a great start and great proof that you can make an amazing game in a small space. If you loved GameBoys and have fond memories of GBA and other small games, you'll love PICO-8.

How to play PICO-8 cartridges

If you just want to explore, you can go to https://www.lexaloffle.com and just play in your browser! PICO-8 is a "fantasy console" that doesn't exist physically (unless you build one, more on that later). If you want to develop cartridges and play locally, you can buy the whole system (any platform) for $14.99, which I have.

If you have Windows and Chrome or New Edge you can just plug in your Xbox Controller with a micro-USB cable and visit https://www.lexaloffle.com/pico-8.php and start playing now! It's amazing - yes I know how it works but it's still amazing - to me to be able to play a game in a web browser using a game controller. I guess I'm easily impressed.

It wasn't very clear to me how to load and play any cartridge LOCALLY. For example, I can play Demon Castle here on the Forums but how do I play it locally and later, offline?

The easy way is to run PICO-8 and hit ESC to get their command line. Then I type LOAD #cartid where #cartid is literally the id of the cartridge on the forums. In the case of Demon Castle it's #demon_castle-0 so I can just LOAD #demon_castle-0 followed by RUN.

Alternatively - and this is just lovely - if I see the PNG pic of the cartridge on a web page, I can just save that PNG locally into C:\Users\scott\AppData\Roaming\pico-8\carts, then run it with LOAD demon_castle-0 (or I can include the full filename with extension). THAT PNG ABOVE IS THE ACTUAL GAME AS WELL. What a clever thing - a true virtual cartridge.

One of the many genius parts of the PICO-8 is that the "cartridges" are actually PNG pictures of cartridges. Drink that in for a second. They save a screenshot of the game while the cart is running, then they hide the actual code in a steganographic process - they hide the code in two of the bits of each color channel! Since the cart pics are 160x205, there's enough room for 32k.
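
To make the trick concrete, here's a minimal sketch of my own (an illustration of the idea, not PICO-8's actual byte layout or channel ordering) showing how two low bits from each of the four RGBA channels add up to one hidden byte per pixel - and 160 x 205 pixels x 1 byte is 32,800 bytes, just over 32k:

// Conceptual sketch only: each pixel's RGBA low bits quietly carry one byte of cart data.
static byte[] ExtractHiddenBytes(byte[] rgbaPixels) // raw RGBA, 4 bytes per pixel
{
    var data = new byte[rgbaPixels.Length / 4];
    for (int i = 0; i < data.Length; i++)
    {
        int r = rgbaPixels[i * 4 + 0] & 0b11;
        int g = rgbaPixels[i * 4 + 1] & 0b11;
        int b = rgbaPixels[i * 4 + 2] & 0b11;
        int a = rgbaPixels[i * 4 + 3] & 0b11;
        // Recombine the four 2-bit fragments into one byte (the ordering here is arbitrary).
        data[i] = (byte)(r | (g << 2) | (b << 4) | (a << 6));
    }
    return data; // 160 * 205 pixels -> 32,800 bytes of hidden payload
}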

A p8 file is source code and a p8.png is the compiled cart!

How to make PICO-8 games

The PICO-8 software includes everything you need - consciously constrained - to make AND play games. You hit ESC to move between the game and the game designer. It includes a sprite and music editor as well.

From their site, the specifications are TIGHT on purpose because constraints are fun. When I wrote for the PalmPilot back in the 90s I had just 4k of heap and it was the most fun I've had in years.

Display - 128x128, 16 colours
Cartridge Size - 32k
Sound - 4 channel chip blerps
Code - Lua
Sprites - 256 8x8 sprites
Map - 128x32 cels

"The harsh limitations of PICO-8 are carefully chosen to be fun to work with, to encourage small but expressive designs, and to give cartridges made with PICO-8 their own particular look and feel."

The code you will use is Lua. Here's some demo code for a Hello World that animates 11 sprites and includes two lines of text:

t = 0
music(0) -- play music from pattern 0

function _draw()
  cls()
  for i=1,11 do                 -- for each letter
    for j=0,7 do                -- for each rainbow trail part
      t1 = t + i*4 - j*2        -- adjusted time
      y = 45-j + cos(t1/50)*5   -- vertical position
      pal(7, 14-j)              -- remap colour from white
      spr(16+i, 8+i*8, y)       -- draw letter sprite
    end
  end

  print("this is pico-8", 37, 70, 14)
  print("nice to meet you", 34, 80, 12)
  spr(1, 64-4, 90)              -- draw heart sprite
  t += 1
end

That's just a simple example, there's a huge forum with thousands of games and lots of folks happy to help you in this new world of game creation with the PICO-8. Here's a wonderful PICO-8 Cheat Sheet to print out with a list of functions and concepts. Maybe set it as your wallpaper while developing? There's a detailed User Manual and a 72 page PICO-8 Zine PDF which is really impressive!

And finally, be sure to bookmark this GitHub hosted amazing curated list of PICO-8 resources! https://github.com/pico-8/awesome-PICO-8


Writing PICO-8 Code in another Editor

There is a 3 year old PICO-8 extension for Visual Studio Code that is a decent start, although it's created assuming a Mac, so if you are a Windows user, you will need to change the Keyboard Shortcuts to something like "Ctrl-Shift-Alt-R" to run cartridges. There's no debugger that I'm seeing. In an ideal world we'd use launch.json and have a registered PICO-8 type and that would make launching after changing code a lot clearer.

There is a more recent "pico8vscodeditor" extension by Steve Robbins that includes snippets for loops and some snippets for the Pico-8 API. I recommend this newer fleshed out extension - kudos Steve! Be sure to include the full path to your PICO-8 executable, and note that the hotkey to run is a chord, starting with "Ctrl-8" then "R."

Telling VS-Code about PICO-8

Editing code directly in the PICO-8 application is totally possible and you can truly develop an entire cart in there, but if you do, you're a better person than I. Here's a directory listing in VSCode on the left and PICO-8 on the right.

Directories in PICO-8

And some code.

Editing Pico-8 code

You can export to HTML5 as well as binaries for Windows, Mac, and Linux. It's a full game maker! There are also other game systems out there like PicoLove that take PICO-8 in different directions and those are worth knowing about as well.

What about a physical PICO-8 Console

A number of folks have talked about the ultimate portable handheld PICO-8 device. I have done a lot of spelunking and as of this writing it doesn't exist.

You could get a Raspberry Pi Zero and put this Waveshare LCD hat on top. The screen is perfect. But the joystick and buttons...just aren't. There's also no sound by default. But $14 is a good start.

The Tiny GamePi15, also from Waveshare, could be good with decent buttons, but it has a 240x240 screen.

The full sized Game Hat looks promising and has a large 480x320 screen, so you could play PICO-8 at a scaled 256x256.

The RetroStone is also close but you're truly on your own, compiling drivers yourself (twitter thread) from what I can gather.

The ClockworkPI GameShell is SOOOO close but the screen is 320x240, which makes 128x128 an awkward scaled mess with aliasing, and the screen the Clockwork folks chose doesn't have a true grid of pixels. Their pixels are staggered. Hopefully they'll offer an alternative module one day, then this would truly be the perfect device. There are clear instructions on how to get going.

The PocketCHIP has a great screen but a nightmare input keyboard.

For now, any PC, laptop, or Raspberry Pi with a proper setup will do just fine for you to explore the PICO-8 and the world of fantasy consoles!

Sponsor: OzCode is a magical debugging extension for C#/.NET devs working in Visual Studio. Get to the root cause of your bugs faster with heads-up display, advanced search inside objects, LINQ query debugging, side-by-side object comparisons & more. Try for free!
© 2019 Scott Hanselman. All rights reserved.
     

Good, Better, Best - creating the ultimate remote worker webcam setup on a budget

Aug 6, 2019

Description:

I've been a remote worker and an occasional YouTuber for well over a decade. I'm always looking for a better setup because the goal is clear - how can I interact with you and my co-workers in a way that has high-enough fidelity that I don't need to drive to Seattle every week!

I believe if my camera is clear and my audio is clear then I can really have a remote relationship with my team that is effective and true.

Everyone has a webcam these days and can just get on a video call and have a chat - but is it of sufficient quality that you feel like you're really having a good conversation with folks and truly connecting?

Here's a shot of my setup during a meeting I'm in here at Microsoft:

My setup - webcam and camera

Here are my thoughts on Good, Better, and Best setups for remotes and YouTubers without spending thousands.

Good - High quality video for Webcams and Remotes

The Logitech C270 Webcam can be gotten for as little as $20 or less! It's wholly adequate with enough light. It only does 720p and it's USB2 so I can't enthusiastically recommend it, but it's OK - again, if you throw light at it. In the dark it's just a webcam.

Logitech C270

The Logitech USB Headset H570 is decent, as is the lovely Jabra UC Voice corded headset. I prefer the Jabra because it only covers one ear and doesn't give me the "two covered ears" claustrophobic feeling. To be clear - audio quality matters. Any crappy headset (or a quality one as above) will ALWAYS be better than your webcam's default or your laptop's default. Always. Mics need to be closer to your mouth to sound good.

Small webcam ring light - Light light light. Webcams, especially cheap ones, NEED LIGHT. It feels weird and I get it, but the quality is SO MUCH BETTER with some decent fill light. Get a ring light that's powered by USB and use it on calls. Yes, it looks ridiculous but it WORKS.

Ringlight

Better

Logitech Brio

How can we improve on the GOOD setup? Clearer video and better sound/sound feel.

Some folks feel the Logitech Brio is overhyped and I think that's fair. It's a "4k" camera that's not as impressive as it should be. That said, it's a solid camera and arguably the best Logitech has to offer.

If I could suggest a middle-of-the-road, solid "BETTER" setup for a remote worker, I'd recommend these:

Logitech Brio - solid 1080p 30fps
Logitech USB Headset
LED Light ring

The lights are the magic.

Now, moving beyond USB headsets, I love adding speakerphones - not for the mic, literally for the speaker. I love the Plantronics Portable USB Speakerphone. Requires no drivers, it just shows up as a mic and speaker automatically. I have it front and center in front of my monitor and I use it every day. It makes me feel like my Home Office is a real Office somehow.

Plantronics Calisto 610-M Portable USB Speakerphone

If conversations are private I'll use the headset above for the audio, but when I want the sound to "come from the monitor" I'll SPLIT the audio. This is a pro tip. You can set the Mic input as the headset mic and the Speaker output as the Speakerphone (or your main speakers). I like using the Speakerphone for voice and keeping the computer's output on the main speakers. Having this separation of voice and computer sounds is a small trick I play on myself, but it helps to create a sense of location where the remote video person comes out of separate speakers.

Selecting Output Speakers

Best

Let's spend a little bit of money, but not so much that we break the bank.

I'm going to make my own webcam. Rather than a plastic off-the-shelf webcam, let's take an actual mirrorless camera - the kind you'd take to a photography class - and make it a HIGH QUALITY webcam.

We need a great camera and it needs to support HDMI out. The camera also needs to be able to stay on all day long, not overheat, and it needs to run on AC power (not on battery).

Here's a list of cameras that have clean HDMI out and can stay on all day. You might have one of these cameras in your closet! I like the Sony A6000 and here's its characteristics.

Sony A6000 - I found this on Craigslist for $300.

Max resolution: 1080p and a buttery smooth 60fps
Clean HDMI: Yes
Unlimited runtime: Yes
Connection type: Micro HDMI
Power: Dummy Battery
Verified by: Elgato
Notes: Requires dummy battery for power (sold separately). Retains full autofocus with clean HDMI output.

Sony A6000

I need a "dummy battery" for this camera. Turns out this is a whole class of thing you can buy. Who knew? This camera has micro-HDMI so I need a micro-HDMI to HDMI cable. Now this is just a loose camera, so how will I mount it on my monitor? I like mounting it INSIDE the Ring Light. If you don't want the light you can just get this clamp mount. Or you can do what I did - get the CLAMP, then the LIGHT, and then put the CAMERA in that like a sandwich.

Flexible Jaws Clamp Clip Mount Holder

This camera and cameras like it output HDMI, and I need that HDMI fed into my computer so the camera's output looks like a regular Webcam. The magical device that does this for us is the Elgato CamLink 4k. It's literally a little stick with HDMI input on one end and USB3 on the other. It took 5 minutes to install. This device also has the added benefit of being a generic "capture card" if you want to record or broadcast your gaming consoles OR other computers!

Elgato CamLink 4k

Here's a YouTube video I made that shows you these cameras, before and after - Good, Better, and BEST!

Logitech Brio vs Sony A6000 with Elgato Camlink 4k - No Contest

What do you think? Thanks to John Miller and Jeff Fritz for their help and guidance!

* I use Amazon referral links and donate the little money to my kids' school. You support charter schools when you use these links.

Sponsor: OzCode is a magical debugging extension for C#/.NET devs working in Visual Studio. Get to the root cause of your bugs faster with heads-up display, advanced search inside objects, LINQ query debugging, side-by-side object comparisons & more. Try for free!


© 2019 Scott Hanselman. All rights reserved.
     

Dotnet Depends is a great text mode development utility made with Gui.cs

Aug 1, 2019

Description:

I love me some text mode. ASCII, ANSI, VT100. Keep your 3D accelerated ray traced graphics and give me a lovely emoji-based progress bar.

Miguel has a nice thing called Gui.cs and I bumped into it in an unexpected and lovely place. There are hundreds of great .NET Global Tools that you can install to make your development lifecycle smoother, and I was installing Martin Björkström's lovely "dotnet depends" tool (go give him a GitHub star now!)  like this:

dotnet tool install -g dotnet-depends

Then I headed over to my Windows Terminal (get it free in the Store) and ran "dotnet depends" on my main website's code and was greeted by this (don't sweat the line spacing, that's a Terminal bug that'll be fixed soon):

dotnet depends in the Windows Terminal

How nice is this! It's a fully featured dependency explorer but it's all in text mode and doesn't require me to use the mouse or take my hands off the keyboard. If I'm already deep in the terminal/text mode, this is a great example of a solid, useful tool.

But how hard was it to make? Surprisingly little effort, as his code is very simple. This is a testament to how he used the API and how Miguel designed it. He's separated the UI and the Business Logic, of course. He does the analysis work and stores it in a graph variable.

Here they're setting up some panes for the (text mode) Windows:

Application.Init();

var top = new CustomWindow();

var left = new FrameView("Dependencies")
{
    Width = Dim.Percent(50),
    Height = Dim.Fill(1)
};
var right = new View()
{
    X = Pos.Right(left),
    Width = Dim.Fill(),
    Height = Dim.Fill(1)
};

It's split in half at this point, with the left side staying at 50%.

var orderedDependencyList = graph.Nodes.OrderBy(x => x.Id).ToImmutableList();
var dependenciesView = new ListView(orderedDependencyList)
{
    CanFocus = true,
    AllowsMarking = false
};
left.Add(dependenciesView);

// runtimeDepends, packageDepends, and reverseDepends are FrameViews created
// elsewhere in his code (omitted here); each one hosts one of these ListViews.
var runtimeDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
runtimeDepends.Add(runtimeDependsView);
var packageDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
packageDepends.Add(packageDependsView);
var reverseDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
reverseDepends.Add(reverseDependsView);

right.Add(runtimeDepends, packageDepends, reverseDepends);
top.Add(left, right, helpText);
Application.Top.Add(top);

The right side gets three ListViews added to it and the left side gets the dependencies view. Top it off with some clean data binding to the views and an initial call to UpdateLists. Anytime the dependenciesView gets a SelectedChanged event we'll call UpdateLists again.

top.Dependencies = orderedDependencyList;
top.VisibleDependencies = orderedDependencyList;
top.DependenciesView = dependenciesView;

dependenciesView.SelectedItem = 0;
UpdateLists();

dependenciesView.SelectedChanged += UpdateLists;

Application.Run();

What's in update lists? Filtering code for that graph variable from before.

void UpdateLists()
{
    var selectedNode = top.VisibleDependencies[dependenciesView.SelectedItem];

    runtimeDependsView.SetSource(graph.Edges.Where(x => x.Start.Equals(selectedNode) && x.End is AssemblyReferenceNode)
        .Select(x => x.End).ToImmutableList());
    packageDependsView.SetSource(graph.Edges.Where(x => x.Start.Equals(selectedNode) && x.End is PackageReferenceNode)
        .Select(x => $"{x.End}{(string.IsNullOrEmpty(x.Label) ? string.Empty : " (Wanted: " + x.Label + ")")}").ToImmutableList());
    reverseDependsView.SetSource(graph.Edges.Where(x => x.End.Equals(selectedNode))
        .Select(x => $"{x.Start}{(string.IsNullOrEmpty(x.Label) ? string.Empty : " (Wanted: " + x.Label + ")")}").ToImmutableList());
}

That's basically it and it's fast as heck. Probably to be expected from the folks that brought you Midnight Commander.
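
If you want to kick the tires on Gui.cs yourself, here's a minimal hello-world sketch of my own (written against the Terminal.Gui package as I understand its API, so treat the constructors and property names as assumptions to verify):

using Terminal.Gui;

class Program
{
    static void Main()
    {
        Application.Init();

        // A full-screen window with a single label in it.
        var win = new Window("Hello Gui.cs")
        {
            Width = Dim.Fill(),
            Height = Dim.Fill()
        };
        win.Add(new Label(1, 1, "Text mode lives!"));

        Application.Top.Add(win);
        Application.Run(); // blocks until the app shuts down
    }
}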

Are you working on any utilities or cool projects and might want to consider - gasp - text mode over a website?

Sponsor: Looking for a tool for performance profiling, unit test coverage, and continuous testing that works cross-platform on Windows, macOS, and Linux? Check out the latest JetBrains Rider!


© 2019 Scott Hanselman. All rights reserved.
     

Docker Desktop for WSL 2 integrates Windows 10 and Linux even closer

Jul 30, 2019

Description:

Being able to seamlessly run Linux on Windows is making a bunch of common development tasks easier. When you're running WSL2 (Windows Subsystem for Linux 2) in a version of Windows 10 greater than build 18945, a BUNCH of useful and interesting scenarios light up and stuff just works.

Docker for Windows (download the Docker Desktop for WSL 2 Tech preview here) is great, but it has historically worked on Windows by creating a Hyper-V virtual machine called Moby that is visible within the Hyper-V client. It's a utility VM, but it's one you're aware of.

Docker for Windows using WSL2

However, if WSL2 runs a real Linux kernel in Windows 10 and it's managing a virtual machine platform underneath (and not visible to) Hyper-V client tools, then why not just let WSL2 handle containers for us?

That's exactly what the Docker Desktop WSL 2 Tech Preview aims to do. And just like WSL 2, it's fast.

...the time required to start a Docker daemon after a cold start is significantly faster. It takes less than 2 seconds to start the Docker daemon when compared to tens of seconds in the current version of Docker Desktop.

Once you've got a Linux distro (Ubuntu or the like) set up in WSL 2, you can right click on Docker Desktop and click "WSL 2 Tech Preview." This is a goofy and not-super-intuitive UI for now but it's a moment in time.

Click WSL 2 Tech Preview

Then you just hit Start.

NOTE: If you've already installed Docker within WSL 2 at the command line, stop it and let Docker Desktop manage its lifecycle.

Here's the beginnings of their UI.

Docker for WSL2

When I drop out to PowerShell/CMD on Windows I can run "docker context ls."

C:\Users\Scott\Desktop> docker context ls
NAME      DESCRIPTION                                DOCKER ENDPOINT
default   Current DOCKER_HOST based configuration    npipe:////./pipe/docker_engine
wsl *     Docker daemon hosted in WSL 2              npipe:////./pipe/docker_wsl

You can see there are two contexts; I've run "docker context use wsl" and that's now my default.

Here is docker images from Ubuntu, and again from Windows (in PowerShell Core). They are the same!

Docker images in Ubuntu Docker images from Powershell

Sweet. Here I am using PowerShell Core (which is open source and cross-platform, natch) to manage my builds, which are themselves cross-platform, and I can run either a docker build or a metal build on both Windows and Linux, all seamlessly on the same box.

building docker images

Also note, Simon from Docker points out "We are using a non default dataroot in this mode to avoid corrupting a datastore you use without docker desktop in case something goes wrong. Stopping the docker desktop wsl daemon and restarting the one you installed manually should bring everything back." I noticed this because my "Windows Docker" and my original WSL2 docker had a list of images that I naively expected to be available here, but this is a new context and new dataroot, so you may need to fetch images again in this new world if you've historically been an active docker user.

So far I'm super impressed. Linux on the Windows Desktop feels right. It's Peanut Butter and Chocolate.

Sponsor: Looking for a tool for performance profiling, unit test coverage, and continuous testing that works cross-platform on Windows, macOS, and Linux? Check out the latest JetBrains Rider!


© 2019 Scott Hanselman. All rights reserved.
     

Ruby on Rails on Windows is not just possible, it's fabulous using WSL2 and VS Code

Jul 25, 2019

Description:

I've been trying on and off to enjoy Ruby on Rails development on Windows for many years. I was doing Ruby on Windows as long as 13 years ago. There's been many valiant efforts to make Rails on Windows a good experience. However, given that Windows 10 can run Linux with WSL (Windows Subsystem for Linux) and now Windows runs Linux at near-native speeds with an actual shipping Linux Kernel using WSL2, Ruby on Rails folks using Windows should do their work in WSL2.

Running Ruby on Rails on Windows Get a recent Windows 10

WSL2 will be released later this year but for now you can easily get it by signing up for Windows Insiders Fast and making sure your version of Windows is 18945 or greater. Just run "winver" to see your build number. Run Windows Update and get the latest.

Enable WSL2

You'll want the newest Windows Subsystem for Linux. From a PowerShell admin prompt run this:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

and head over to the Windows Store and search for "Linux" or get Ubuntu 18.04 LTS directly. Download it, run it, make your sudo user.

Make sure your distro is running at max speed with WSL2. From that earlier PowerShell prompt, run wsl --list -v to see your distros and their WSL versions.

C:\Users\Scott\Desktop> wsl --list -v
NAME STATE VERSION
* Ubuntu-18.04 Running 2
Ubuntu Stopped 1
WLinux Stopped 1

You can upgrade any WSL1 distro like this, and once it's done, it's done.

wsl --set-version "Ubuntu-18.04" 2

And certainly feel free to get cool fonts and styles and make yourself a nice shiny Linux experience...maybe with the Windows Terminal.

Get the Windows Terminal

Bonus points, get the new open source Windows Terminal for a better experience at the command line. Install it AFTER you've set up Ubuntu or a Linux and it'll auto-populate its menu for you. Otherwise, edit your profiles.json and make a profile with a commandLine like this:

"commandline" : "wsl.exe -d Ubuntu-18.04"

See how I'm calling wsl -d (for distro) with the short name of the distro?

Ubuntu in the Terminal Menu

Since I have a real Ubuntu environment on Windows I can just follow these instructions to set up Rails!

Set up Ruby on Rails

Ubuntu instructions work because it is Ubuntu! https://gorails.com/setup/ubuntu/18.04

Additionally, I can install as many Linuxes as I want, even a Dev vs. Prod environment if I like. WSL2 is much lighter weight than a full Virtual Machine.

Once Rails is set up, I'll try making a new hello world:

rails new myapp

and here's the result!

Ruby on Rails in the new Windows Terminal

I can also run "explorer.exe ." and launch Windows Explorer and see and manage my Linux files. That's allowed now in WSL2 because it's running a Plan9 server for file access.

Ubuntu files inside Explorer on Windows 10

Install VS Code and the VS Code Remote Extension Pack

I'm going to install the VS Code Remote Extension pack so I can develop from Windows on remote machines OR in WSL or a Container directly. I can click the lower left corner of VS Code or check the Command Palette for this list of menu items. Here I can "Reopen Folder in WSL" and pick the distro I want to use.

Remote options in VS Code

Now that I've opened the folder for development in WSL, look closely at the lower left corner. You can see I'm in a WSL development mode AND Visual Studio Code is recommending I install a Ruby VS Code extension...inside WSL! I don't even have Ruby and Rails on Windows. I'm going to have the Ruby language servers and VS Code headless parts live in WSL - in Linux - where they'll be the most useful.

Ruby inside WSL

This synergy, this balance between Windows (which I enjoy) and Linux (whose command line I enjoy) has turned out to be super productive. I'm able to do all the work I want - Go, Rust, Python, .NET, Ruby - and move smoothly between environments. There's not a clear separation like there is with the "run it in a VM" solution. I can access my Windows files from /mnt/c from within Linux, and I can always get to my Linux files at \\wsl$ from within Windows.

Note that I'm running rails server -b=0.0.0.0 to bind on all available IPs, and this makes Rails available to "localhost" so I can hit the Rails site from Windows! It's my machine, so it's my localhost (the networking complexities are handled by WSL2).

$ rails server -b=0.0.0.0
=> Booting Puma
=> Rails 6.0.0.rc2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.1 (ruby 2.6.2-p47), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop

Here it is in new Edge (chromium). So this is Ruby on Rails running in WSL, as browsed to from Windows, using the new Edge with Chromium at its heart. Cats and dogs, living together, mass hysteria.

Ruby on Rails on Windows from WSL

Even better, I can install the ruby-debug-ide gem inside WSL and now I'm doing interactive debugging from VS Code, but again, note that the "work" is happening inside WSL.

Debugging Rails on Windows

Enjoy!

Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.


© 2019 Scott Hanselman. All rights reserved.
     

System.Text.Json and new built-in JSON support in .NET Core

Jul 23, 2019

Description:

In a world where JSON (JavaScript Object Notation) is everywhere it's long been somewhat frustrating that .NET didn't have built-in JSON support. JSON.NET is great and has served us well but it's remained a 3rd party dependency for basic stuff like an ASP.NET web site or a simple console app.

Back in 2018 plans were announced to move JSON into .NET Core 3.0 as an intrinsic supported feature, and while they're at it, get double the performance or more with Span<T> support and no memory allocations. ASP.NET in .NET Core 3.0 removes the JSON.NET dependency but still allows you to add it back in a single line if you'd like.
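
That "single line" to opt back into JSON.NET looks roughly like this in Startup.ConfigureServices (a sketch assuming you've referenced the Microsoft.AspNetCore.Mvc.NewtonsoftJson package):

public void ConfigureServices(IServiceCollection services)
{
    // Swap the built-in System.Text.Json formatters back out for JSON.NET
    services.AddControllers()
            .AddNewtonsoftJson();
}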

NOTE: This is all automatic and built in with .NET Core 3.0, but if you're targeting .NET Standard or .NET Framework, install the System.Text.Json NuGet package (make sure to include previews and install version 4.6.0-preview6.19303.8 or higher). In order to get the integration with ASP.NET Core, you must target .NET Core 3.0.

It's very clean as well. Here's a simple example.

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

namespace verysmall
{
    class WeatherForecast
    {
        public DateTimeOffset Date { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var w = new WeatherForecast() { Date = DateTime.Now, TemperatureC = 30, Summary = "Hot" };
            Console.WriteLine(JsonSerializer.Serialize<WeatherForecast>(w));
        }
    }
}

The default options result in minified JSON as well.

{"Date":"2019-07-27T00:58:17.9478427-07:00","TemperatureC":30,"Summary":"Hot"}

Of course, when you're returning JSON from a Controller in ASP.NET it's all automatic and with .NET Core 3.0 it'll automatically use the new System.Text.Json unless you override it.

Here's an example where we pull out some fake Weather data (5 randomly created reports) and return the array.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}

The application/json content type is used and JSON is returned by default. If the return type were just string, we'd get text/plain. Check out this YouTube video to learn more details about how System.Text.Json works and how it was designed. I'm looking forward to working with it more!

Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.


© 2019 Scott Hanselman. All rights reserved.
     

Installing PowerShell with one line as a .NET Core global tool

Jul 18, 2019

Description:

I'm a huge fan of .NET Core global tools. I've done a podcast on Global Tools. Just like Node and other platforms have global tools that can be easily and quickly installed and then used in build scripts, CI/CD (Continuous Integration/Continuous Deployment) systems, or just general command line utilities, .NET Global Tools are easily made (by you!) and distributed via NuGet.

Some cool examples (and there are hundreds) are the "Try .NET" workshop runner and creator that you can use to make interactive documentation, or coverlet for code coverage. There's a great and growing list of .NET Core Global Tools on GitHub.

If you've got the .NET SDK installed you can try out a global tool just like this.

dotnet tool install -g dotnetsay

Then run this example with "dotnetsay," it's fun.

Stepping back a moment, you may be familiar with PowerShell. It's a scripting language and a command line shell like Bash or DOS or the Windows Command Prompt. You may think of PowerShell as a tool for maintaining and managing Windows Servers.

However in recent years, PowerShell has gone cross platform and runs most anywhere. It's lightweight and has .NET Core at its, ahem, core. You can use PowerShell for scripting systems on any platform and if you're a .NET developer the team has made installing and immediately using PowerShell in scripts a one liner - which is genius. It's PowerShell as a .NET Global Tool.

Here's an example output from my system running Ubuntu. I just "dotnet tool install --global PowerShell."

$ dotnet --version
2.1.502
$ dotnet tool install --global PowerShell
You can invoke the tool using the following command: pwsh
Tool 'powershell' (version '6.2.2') was successfully installed.
$ pwsh
PowerShell 6.1.1
https://aka.ms/pscore6-docs
Type 'help' to get help.
PS /mnt/c/Users/Scott/Desktop>
exit

Here I've checked that I have .NET 2.x or above, then I install PowerShell. I can run scripts or I can drop into the interactive shell. Note the PS prompt and my current directory above.

In fact, PowerShell is so useful as a scripting language when combined with .NET Core that PowerShell has been included as a global tool within the .NET Core 3.0 Preview Docker images since Preview 4. This means you can use PowerShell lines/scripts inside Docker images.

FROM mcr.microsoft.com/dotnet/core/sdk:3.0
RUN pwsh -c Get-Date
RUN pwsh -c "Get-Module -ListAvailable | Select-Object -Property Name, Path"

Being able to easily install PowerShell as a global tool means you can count on it in your scripts, CI/CD systems, or docker containers. It's also nice to be able to use existing PowerShell scripts cross platform.

I'm impressed with this idea - installing PowerShell itself as a .NET Global Tool. Very clever and useful.

Sponsor: Ossum unifies agile planning, version control, and continuous integration into a smart platform that saves 3x the time and effort so your team can focus on building their next great product. Sign up free.


© 2019 Scott Hanselman. All rights reserved.
     

DragonFruit and System.CommandLine is a new way to think about .NET Console apps

Jul 16, 2019

Description:

There's some interesting stuff quietly happening in the "Console App" world within open source .NET Core right now. Within the https://github.com/dotnet/command-line-api repository are three packages:

System.CommandLine.Experimental
System.CommandLine.DragonFruit
System.CommandLine.Rendering

These are interesting experiments and directions that are exploring how to make Console apps easier to write, more compelling, and more useful.

The one I am the most infatuated with is DragonFruit.

Historically Console apps in classic C look like this:

#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("Hello, World!\n");
    return 0;
}

That first argument argc is the count of the number of arguments you've passed in, and argv is an array of pointers to 'strings,' essentially. The actual parsing of the command line arguments and the semantic meaning of the args you've decided on are totally on you.

C# has done it this way, since always.

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
}

It's a pretty straight conceptual port from C to C#, right? It's an array of strings. Argc is gone because you can just use args.Length.

If you want to make an app that does a bunch of different stuff, you've got a lot of string parsing before you get to DO the actual stuff your app is supposed to do. In my experience, a simple console app with real, proper command line arg validation can end up with half the code parsing crap and half of it doing the actual work.

myapp.com someCommand --param:value --verbose
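
To make that pain concrete, here's a hedged sketch of my own (not from the System.CommandLine repo) of the kind of hand-rolled parsing a command line like that tends to accumulate before any real work happens:

static int Main(string[] args)
{
    // Hand-rolled parsing: all boilerplate, no business logic yet.
    string command = null;
    string param = null;
    bool verbose = false;

    foreach (var arg in args)
    {
        if (arg == "--verbose")
            verbose = true;
        else if (arg.StartsWith("--param:"))
            param = arg.Substring("--param:".Length);
        else if (command == null)
            command = arg;
        else
        {
            Console.Error.WriteLine($"Unrecognized argument: {arg}");
            return 1;
        }
    }

    // ...only now does the app get to do the thing it was actually written to do.
    Console.WriteLine($"command={command}, param={param}, verbose={verbose}");
    return 0;
}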

The larger question - one that DragonFruit tries to answer - is why doesn't .NET do the boring stuff for you in an easy and idiomatic way?

From their docs, what if you could declare a strongly-typed Main method? This was the question that led to the creation of the experimental app model called "DragonFruit", which allows you to create an entry point with multiple parameters of various types and using default values, like this:

static void Main(int intOption = 42, bool boolOption = false, FileInfo fileOption = null)
{
    Console.WriteLine($"The value of intOption is: {intOption}");
    Console.WriteLine($"The value of boolOption is: {boolOption}");
    Console.WriteLine($"The value of fileOption is: {fileOption?.FullName ?? "null"}");
}

In this concept, the Main method - the entry point - is an interface that can be used to infer options and apply defaults.

using System;

namespace DragonFruit
{
    class Program
    {
        /// <summary>
        /// DragonFruit simple example program
        /// </summary>
        /// <param name="verbose">Show verbose output</param>
        /// <param name="flavor">Which flavor to use</param>
        /// <param name="count">How many smoothies?</param>
        static int Main(
            bool verbose,
            string flavor = "chocolate",
            int count = 1)
        {
            if (verbose)
            {
                Console.WriteLine("Running in verbose mode");
            }
            Console.WriteLine($"Creating {count} banana {(count == 1 ? "smoothie" : "smoothies")} with {flavor}");
            return 0;
        }
    }
}

I can run it like this:

> dotnet run --flavor Vanilla --count 3
Creating 3 banana smoothies with Vanilla

The way DragonFruit does this is super clever. During the build process, DragonFruit changes this public strongly typed Main to a private one (so it's not seen from the outside and .NET won't consider it an entry point). It's then replaced with a Main like this, but you'll never see it as it's in the compiled/generated artifact.

public static async Task<int> Main(string[] args)
{
    return await CommandLine.ExecuteAssemblyAsync(typeof(AutoGeneratedProgram).Assembly, args, "");
}

So DragonFruit has swapped your Main for its smarter Main and the magic happens! You'll even get free auto-generated help!

DragonFruit:
DragonFruit simple example program

Usage:
DragonFruit [options]

Options:
--verbose Show verbose output
--flavor <flavor> Which flavor to use
--count <count> How many smoothies?
--version Display version information

If you want less magic and more power, you can use the same APIs DragonFruit uses to make very sophisticated behaviors. Check out the Wiki and Repository for more and perhaps get involved in this open source project!

I really like this idea and I'd love to see it taken further! Have you used DragonFruit on a project? Or are you using another command line argument parser?

Sponsor: Ossum unifies agile planning, version control, and continuous integration into a smart platform that saves 3x the time and effort so your team can focus on building their next great product. Sign up free.


© 2019 Scott Hanselman. All rights reserved.
     

Real World Cloud Migrations: Azure Front Door for global HTTP and path based load-balancing

Jul 11, 2019

Description:

As I've mentioned lately, I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.

I'm breaking a single machine into a series of small sites BUT I want to still maintain ALL my existing URLs (for good or bad) and the most important one is hanselman.com/blog/ that I now want to point to hanselmanblog.azurewebsites.net.

That means that the Azure Front Door will be receiving all the traffic - it's the Front Door! - and then forwarding it on to the Azure Web App. That means:

hanselman.com/blog/foo -> hanselmanblog.azurewebsites.net/foo
hanselman.com/blog/bar -> hanselmanblog.azurewebsites.net/bar
hanselman.com/blog/foo/bar/baz -> hanselmanblog.azurewebsites.net/foo/bar/baz

There's a few things to consider when dealing with reverse proxies like this and I've written about that in detail in this article on Dealing with Application Base URLs and Razor link generation while hosting ASP.NET web apps behind Reverse Proxies.

You can and should read in detail about Azure Front Door here.

It's worth considering a few things. Front Door MAY be overkill for what I'm doing because I have a small, modest site. Right now I've got several backends, but they aren't yet globally distributed. If I had a system with lots of regions and lots of App Services all over the world AND a lot of static content, Front Door would be a perfect fit. Right now I have just a few App Services (Backends in this context) and I'm using Front Door primarily to manage the hanselman.com top level domain and manage traffic with URL routing.

On the plus side, that might mean Azure Front Door was exactly what I needed. It was super easy to set up Front Door as there's a visual Front Door Designer. It took less than 20 minutes to get it all routed, and SSL certs took just a few hours more. You can see below that I associated staging.hanselman.com with two Backend Pools. This UI in the Azure Portal is (IMHO) far easier than the Azure Application Gateway. Additionally, Front Door is Global while App Gateway is Regional. If you were a massive global site, you might put Azure Front Door in, ahem, front, and Azure App Gateway behind it, regionally.

Again, a little overkill as my Pools are pools of one, but it gives me room to grow. I could easily balance traffic globally in the future.

CONFUSION: In the past with my little startup I've used Azure Traffic Manager to route traffic to several App Services hosted all over the globe. When I heard of Front Door I was confused, but it seems like Traffic Manager is mostly global DNS load balancing for any network traffic, while Front Door is Layer 7 load balancing for HTTP traffic, and uses a variety of criteria to route traffic. Azure Front Door can also act as a CDN and cache all your content as well. There's lots of detail on Front Door's routing architecture details and traffic routing methods. Azure Front Door is definitely the most sophisticated and comprehensive system for fronting all my traffic. I'm still learning what's the right size app for it and I'm not sure a blog is the ideal example app.

Here's how I set up /blog to hit one Backend Pool. I have it accepting both HTTP and HTTPS. Originally I had a few extra Front Door rules, one for HTTP, one for HTTPS, and I set the HTTP one to redirect to HTTPS. However, Front Door charges 3 cents an hour for each of the first 5 routing rules (then about a penny an hour for each after 5) and I don't (personally) think I should pay for what I consider "best practice" rules. That means forcing HTTPS (an internet standard, these days) as well as URL canonicalization with a trailing slash after paths. That means /blog should 301 to /blog/ etc. These are simple prescriptive things that everyone should be doing. If I was putting a legacy app behind a Front Door, then this power and flexibility in path control would be a boon that I'd be happy to pay for. But in these cases I may be able to have that redirection work done lower down in the app itself and save money every month. I'll update this post if the pricing changes.

/blog hits the Blog App Service

After I set up Azure Front Door I noticed my staging blog was getting hit every few seconds, all day forever. I realized there are some health checks but since there's 80+ Azure Front Door locations and they are all checking the health of my app, it was adding up to a lot of traffic. For a large app, you need these health checks to make sure traffic fails over and you really know if your app is healthy. For my blog, less so.

There's a few ways to tell Front Door to chill. First, I don't need Azure Front Door doing GET requests on /. I can instead ask it to check something lighter weight. With ASP.NET Core 2.2 it's as easy as adding HealthChecks. It's less traffic, and you can make the health check as comprehensive as you want.

app.UseHealthChecks("/healthcheck");
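For completeness, here's roughly what that looks like end to end. This is a minimal sketch assuming the ASP.NET Core 2.2 health checks package (Microsoft.Extensions.Diagnostics.HealthChecks) with the default, do-nothing check:

// In Startup.ConfigureServices - register the health check services
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
    services.AddMvc();
}

// In Startup.Configure - expose a lightweight endpoint for Front Door to probe
public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthcheck");
    app.UseMvc();
}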

Next I turned the Interval WAY up so it wouldn't bug me every few seconds.

Interval set to 255 seconds

These two small changes made a huge difference in my traffic as I didn't have so much extra "pinging."

After setting up Azure Front Door, I also turned on Custom Domain HTTPS and pointed staging to it. It was very easy to set up and was included in the cost.

Custom Domain HTTPS

I haven't decided if I want to set up Front Door's caching or not, but it might mean an easier, more central way than using a CDN manually and changing the URLs for my sites' static content and images. In fact, the POP (Point of Presence) locations for Front Door are the same as those for Azure CDN.

NOTE: I will have to at some point manage the Apex/Naked domain issue where hanselman.com and www.hanselman.com both resolve to my website. It seems this can be handled by either CNAME flattening or DNS chasing and I need to check with my DNS provider to see if this is supported. I suspect I can do it with an ALIAS record. Barring that, Azure also offers an Azure DNS hosting service.

There is another option I haven't explored yet called Azure Application Gateway that I may test out and see if it's cheaper for what I need. I primarily need SSL cert management and URL routing.

I'm continuing to explore as I build out this migration plan. Let me know your thoughts in the comments.

Sponsor: Develop Xamarin applications without difficulty with the latest JetBrains Rider: Xcode integration, JetBrains Xamarin SDK, and manage the required SDKs for Android development, all right from the IDE. Get it today


© 2019 Scott Hanselman. All rights reserved.
     

Dealing with Application Base URLs and Razor link generation while hosting ASP.NET web apps behind Reverse Proxies

Jul 9, 2019

Description:

Updating my site to run on Azure

I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.

I'm also moving from an ASP.NET 4 (was ASP.NET 2 until recently) site to ASP.NET Core 2.x LTS and changing my URL structure. I am aiming to save money but I'm not doing this as a "spend basically nothing" project. Yes, I could convert my site to a static HTML generated blog using any number of great static site generators, or even a Headless CMS. Yes I could host it in Azure Storage fronted by a CMS, or even as a series of Azure Functions. But I have 17 years of content in DasBlog, I like DasBlog, and it's being actively updated to .NET Core and it's a fun app. I also have custom Razor sites in the form of my podcast site and they work great with a great workflow. I want to find a balance of cost effectiveness, features, ease of use, and reliability.  What I have now is a sinking feeling like my site is gonna die tomorrow and I'm not ready to deal with it. So, there you go.

Currently my sites live on a real machine with real folders and it's fronted by IIS on a Windows Server. There's an app (an IIS Application, to be clear) living at \ so that means hanselman.com/ hits / which is likely c:\inetpub\wwwroot full stop.

For historical reasons, when you hit hanselman.com/blog/ you're hitting the /blog IIS Application which could be at d:\whatever but may be at c:\inetpub\wwwroot\blog or even at c:\blog. Who knows. The Application and ASP.NET within it knows that the site is at hanselman.com/blog.

That's important, since I may write a URL like ~/about when writing code. If I'm in the hanselman.com/blog app, then ~/about means hanselman.com/blog/about. If I write /about, that means hanselman.com/about. So the ~ is a shorthand for "starting at this App's base URL." This is great and useful and makes Link generation super easy, but it only works if your app knows what its server-side base URL is.

To be clear, we are talking about the reality of the generated URL that's sent to and from the browser, not about any physical reality on the disk or server or app.
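As a tiny illustration (a hypothetical controller, not code from my site), the same app-relative URL resolves differently depending on the PathBase the app thinks it has:

using Microsoft.AspNetCore.Mvc;

public class DemoController : Controller
{
    public IActionResult About()
    {
        // With an empty PathBase this returns "/about";
        // behind the /blog app it returns "/blog/about".
        var url = Url.Content("~/about");
        return Content(url);
    }
}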

I've moved my world to three Azure App Services called hanselminutes, hanselman, and hanselmanblog. They have names like http://hanselman.azurewebsites.net for example.

ASIDE: You'll note that hitting hanselman.azurewebsites.net will hit an app that looks stopped. I don't want that site to serve traffic from there, I want it to be served from http://hanselman.com, right? Specifically only from Azure Front Door which I'll talk about in another post soon. So I'll use the Access Restrictions and Software Based Networking in Azure to deny all traffic to that site, except traffic from Azure - in this case, from the Azure Front Door Reverse Proxy I'll be using.

That looks like this in this Access Restrictions part of the Azure Portal.

Only allowing traffic from Azure

Since the hanselman.com app will point to hanselman.azurewebsites.net (or one of its staging slots) there's no issue with URL generation. If I say / I mean /, the root of the site. If I generate a URL like "~/about" I'll get hanselman.com/about, right?

But with http://hanselmanblog.azurewebsites.net it's different.

I want hanselman.com/blog/ to point to hanselmanblog.azurewebsites.net.

That means that the Azure Front Door will be receiving traffic, then forwarding it on to the Azure Web App. That means:

hanselman.com/blog/foo -> hanselmanblog.azurewebsites.net/foo
hanselman.com/blog/bar -> hanselmanblog.azurewebsites.net/bar
hanselman.com/blog/foo/bar/baz -> hanselmanblog.azurewebsites.net/foo/bar/baz

There's a few things to consider when dealing with reverse proxies like this.

Is part of the /path being removed or is a path being added?

In the case of DasBlog, we have a configuration setting so that the app knows where it LOOKS like it is, from the Browser URL's perspective.

My blog is at /blog so I add that in some middleware in my Startup.cs. Certainly YOU don't need to have this in config - do whatever works for you as long as context.Request.PathBase is set as the app should see it. I set this very early in my pipeline.

That if statement is there because most folks don't install their blog at /blog, so it doesn't add the middleware.

//if you've configured it at /blog or /whatever, set that pathbase so ~ will generate correctly
Uri rootUri = new Uri(dasBlogSettings.SiteConfiguration.Root);
string path = rootUri.AbsolutePath;

//Deal with path base and proxies that change the request path
if (path != "/")
{
app.Use((context, next) =>
{
context.Request.PathBase = new PathString(path);
return next.Invoke();
});
}

Sometimes you want the OPPOSITE of this. That would mean that I wanted, perhaps, hanselman.com to point to hanselman.azurewebsites.net/blog/. In that case I'd do this in my Startup.cs's Configure method:

app.UsePathBase("/blog");

Be aware that if you're hosting ASP.NET Core apps behind Nginx or Apache or really anything, you'll also want ASP.NET Core to respect X-Forwarded-For and other X-Forwarded standard headers. You'll also likely want the app to refuse to speak to anyone who isn't a certain list of proxies or configured URLs.

I configure these in Startup.cs's ConfigureServices from a semicolon delimited list in my config, but you can do this in a number of ways.

services.Configure<ForwardedHeadersOptions>(options =>
{
options.ForwardedHeaders = ForwardedHeaders.All;
options.AllowedHosts = Configuration.GetValue<string>("AllowedHosts")?.Split(';').ToList<string>();
});

Since Azure Front Door adds these headers as it forwards traffic, from my app's point of view it "just works" once I've added that above and then this in Configure():

app.UseForwardedHeaders();

There seems to be some confusion on hosting behind a reverse proxy in a few GitHub Issues. I'd like to see my scenario ( /foo -> / ) be a single line of code, as we see that the other scenario ( / -> /foo ) is a single line.
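In the meantime, it's easy enough to wrap the middleware above in your own one-liner. This is just a sketch with a hypothetical extension method name, not a built-in API:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class ProxyPathBaseExtensions
{
    // Makes the "/foo -> /" scenario a single line: app.UsePathBaseFromProxy("/blog");
    public static IApplicationBuilder UsePathBaseFromProxy(this IApplicationBuilder app, string pathBase)
    {
        if (string.IsNullOrEmpty(pathBase) || pathBase == "/") return app;
        return app.Use((context, next) =>
        {
            context.Request.PathBase = new PathString(pathBase);
            return next();
        });
    }
}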

Have you had any issues with URL generation when hosting your Apps behind a reverse proxy?

Sponsor: Develop Xamarin applications without difficulty with the latest JetBrains Rider: Xcode integration, JetBrains Xamarin SDK, and manage the required SDKs for Android development, all right from the IDE. Get it today


© 2019 Scott Hanselman. All rights reserved.
     

Real World Cloud Migrations: CDNs are an easy improvement to legacy apps

Jul 4, 2019

Description:

I'm doing a quiet backend migration/update to my family of sites. If I do it right, there will be minimal disruption. Even though I'm a one person show (plus Mandy my podcast editor) the Cloud lets me manage digital assets like I'm a whole company with an IT department. Sure, I could FTP files directly into production, but why do that when I've got free/cheap stuff like Azure DevOps?

As I'm one person, I do want to keep costs down whenever possible, and I've said so in my "Penny Pinching in the Cloud" series. That said, I do pay for value - I'll pay for things I feel are useful, provide me with a quality product, and/or save me time and hassle. For example, Azure Pipelines gives one free Microsoft-hosted CI/CD pipeline and 1 hosted job with 1800 minutes a month. This should be more than enough for my setup.

Additionally, sometimes I'll shift costs in order to both save money and improve performance as when I moved my Podcast's image hosting over to an Azure CDN in 2013. Fast forward 6 years and since I'm doing this larger migration, I wanted to see how easy it would be to migrate even more of my static assets to a CDN.

Make a Storage Account and ensure that its Access Level is Public if you intend folks to be able to read from it over HTTP GETs.

Public Access Level

Within the Azure Portal you can make a CDN Profile that uses Microsoft, Akamai, or Verizon for the actual Content Distribution Network. I picked Microsoft's because I have experience with Akamai and Verizon (they were solid) and I wanted to see how the new Microsoft one does. It claims to put content within 50ms of 60 countries.

You'll have a CDN endpoint host name that points to an Origin. That Origin is the Origin of your content. The Origin can be an existing place you have stuff so the CDN will basically check there first, cache things, then serve the content. Or it can be an existing WebApp or in my case, Storage.

Azure Storage Accounts for my blog

I didn't want to make things too complex, so I have a single Storage Account with a few containers. Then I mapped custom endpoints for both my blog AND my podcast, but they share the same storage. I also took advantage of the free SSL Certs so images.hanselman.com and images.hanselminutes.com are both SSL. Gotta get those "A" grades from https://securityheaders.com right?

Custom domains for my CDN

Then, just grab the Azure Storage Explorer. It's free and cross platform. In fact, you can get Storage Explorer as an app that runs locally, or you can check it out in the Azure Portal/in-browser as well, here. I've uploaded all my main assets (images used in my CSS, blog, headers, etc).

I've settled on a basic scheme where anything that was "/images/foo.png" is now "https://images.hanselman.com/blog/foo.png" or /main/ or /podcast/ depending on who is using it. This made search and replaces reliable and easy.

It's truly a less-than-an-hour operation to enable a CDN on an existing site. While I've chosen to use Azure Storage as my backing store, you can use your existing site's /images folder and just change your markup to pull from the CDN's URL. I recommend you make a simple cdn.yourdomain.com or images.yourdomain.com. This can also easily be enabled with a CNAME in less than an hour. Having your images at another URL via a subdomain CNAME also allows for even more download parallelism from the browser.

This is all in my Staging so you won't see it quite yet until I flip the switch on the whole migration.

Exploring future possibilities for dynamic image content

I haven't yet moved all my blog content images (a few gigs, but many thousands of images) to a CDN as I don't want to change my existing publishing workflow with Open Live Writer. I'm considering a flow that keeps the images uploading to the Web App but then uses the Custom Origin Path options in the Azure CDN so that images get picked up from the Web App and then served by the CDN. In order to manage 17 years of images though, I'd need to catch URLs like this:

https://www.hanselman.com/blog/content/binary/Windows-Live-Writer/0409a9d5fae6_F552/image_9.png

and redirect them to

https://images.hanselman.com/blog/content/binary/Windows-Live-Writer/0409a9d5fae6_F552/image_9.png

As I see it, there's a few options to get images from a GET of /blog/content/binary to be served by a CDN:

Dynamically rewrite the URLs on the way OUT (as the HTML is generated) - CPU expensive, but ya, it's cached in my WebApp
Rewriting Middleware to redirect (301?) requests to the new images location - easiest option, but costs everyone a 301 on all image GETs (see the sketch after this list)
Programmatically change the stored markup (basically forloop over my "database," search and replace, AND ensure future images use this new URL) - a hassle, but a one time hassle. Not sure about future images; I might have to change my publishing flow AND run the process on new posts occasionally.
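Here's a rough sketch of that middleware redirect option - hypothetical code, not what's running on my site, with the host being the images subdomain mentioned above:

// 301 legacy image requests over to the CDN subdomain, leave everything else alone
app.Use(async (context, next) =>
{
    var path = context.Request.Path;
    if (path.StartsWithSegments("/blog/content/binary"))
    {
        context.Response.Redirect("https://images.hanselman.com" + path + context.Request.QueryString, permanent: true);
        return;
    }
    await next();
});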

What are your thoughts on if I should go all the way and manage EVERY blog image vs the low hanging fruit of shared static assets? Worth it, or overkill?

The learning process continues!

Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download now.
© 2019 Scott Hanselman. All rights reserved.
     

Real World Cloud Migrations: Moving a 17 year old series of sites from bare metal to Azure

Jul 2, 2019

Description:

Technical Debt has a way of sneaking up on you. While my podcast site and the other 16ish sites I run all live in Azure and have a nice CI/CD pipeline with Azure DevOps, my main "Hanselman.com" series of sites and mini-sites has lagged behind. I'm still happy with its responsive design, but the underlying tech has started to get more difficult to manage and build and I've decided it's time to make some updates.

Moving sites to Azure DevOps

I want to be able to make these updates and have a clean switch over so that you, the reader, don't notice a difference. There's a number of things to think about when doing any migration like this, realizing it'll take some weeks (or months if you're a bigger company than just me).

Continuous Deployment/Continuous Integration - I host my code on GitHub and Azure DevOps now lets you log in with GitHub and does a fine job of building AND deploying your code (while running tests AND allowing for manual quality gates), so I want to make sure my sites have a nice clean "check in and go live" process. I'll also be using Azure App Services and Deployment Slots, so I'll have a dev/test/staging site and production, like a real professional. No more editing text files in production. Well, at least, I won't tell you when I'm editing text files in production.

Technology Update - Hanselman.com proper (not the blog) and the mini pages/sites underneath it run on ASP.NET 4.0 and WebForms. I was able to easily move the main site over to ASP.NET Razor Pages. Razor is just so elegant, as it's basically just HTML, then you type @ and you're in C# (Razor). More on that below, but the upgrade was a day as the home page and minisites are largely readonly. The Blog, hosted at /blog, will be more challenging given I don't want to break two decades of URLs, along with the fact that it's running DasBlog on a recently upgraded .NET 4.0. DasBlog was originally made in .NET 1, then upgraded to .NET 2, so this is 17 years of technical debt. That said, .NET Standard along with open source cross-platform .NET Core has allowed us - with the leadership of Mark Downie - to create DasBlog Core. DasBlog Core shares the core reliable (if crusty) engine of DasBlog along with an all new system of URL writing using ASP.NET Core middleware, as well as a complete re-do of the (well ahead of its time) DasBlog Theming Engine, now based on Razor Pages. It's brilliant. This is in active development.

Azure Front Door - Because I'm moving from a single machine running IIS to Azure, I'll want to split things apart to remove single points of failure. I'll use Azure Front Door to manage my URL structure and act as a front end cache as well as distribute traffic to multiple Azure App Services (Web Apps).

URL management - Are you changing your URLs and URL structure? Remember that URLs are UI and they matter. I've long wanted to remove the "aspx" extension from my URLs, as well as move the TitleCaseBlogPostThing to a more "modern" title-case-blog-post-thing style. I need to do this in a way that updates my Google sitemap, breaks zero URLs, 301 redirects to the new style, and uses rel=canonical in a smart way (see the sketch after this list).

Shared Assets/CDNs/Front Door - Since I run a family of sites, there's an opportunity to use a CDN as well and some clean CNAME DNS such that images.hanselman.com and images.hanselminutes.com can share assets. Since the Azure CDN is easy to set up and offers free SSL certs and pay-as-you-go, I'll set both of those CNAMEs up to point to the same Azure Storage where I'll keep images, show pics, CSS, and JS.
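Since the URL management piece is the scariest one for me, here's a rough sketch of the kind of 301 middleware I have in mind. This is hypothetical and simplified (it ignores the sitemap and rel=canonical parts) and is not the DasBlog Core implementation:

// In Startup.Configure: redirect TitleCaseBlogPostThing.aspx to title-case-blog-post-thing
// needs using System; and using System.Text.RegularExpressions;
app.Use(async (context, next) =>
{
    var path = context.Request.Path.Value;
    if (path != null && path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
    {
        var trimmed = path.Substring(0, path.Length - ".aspx".Length);
        var kebab = Regex.Replace(trimmed, "(?<=[a-z0-9])(?=[A-Z])", "-").ToLowerInvariant();
        context.Response.Redirect(kebab + context.Request.QueryString, permanent: true);
        return;
    }
    await next();
});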

I'll be blogging the whole process. What do you want to hear/learn about?

Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download now.


© 2019 Scott Hanselman. All rights reserved.
     

Git is case-sensitive and your filesystem may not be - Weird folder merging on Windows

Jun 27, 2019

Description:

I was working on DasBlog Core (a .NET Core cross-platform update of the ASP.NET WebForms-based blogging software that runs this blog) with Mark Downie, the new project manager, and Shayne Boyer. This is part of a larger cloud re-architecture of hanselman.com and the systems that run this whole site.

Shayne was working on getting a DasBlog Core CI/CD (Continuous Integration/Continuous Deployment) pipeline running in Azure DevOps' build system. We wanted individual build pipelines to confirm that DasBlog Core was, in fact, cross-platform, so we needed to build, test, and run it on Windows, Linux, and Mac.

The build was working great on Windows and Mac...but failing on Linux. Why?

Well, like all things, it's complex.

Windows has a case-insensitive file system. By default, Mac uses a case-insensitive file system.

Since Git 1.5ish there's been a setting

git config --global core.ignorecase true

but you should always be aware of what a setting does before you just set it.

If you're not careful, you or someone on your team can create a case-sensitive file path in your git index while you're using a case-insensitive operating system like Windows or Mac. If you do this, you can end up with two separate entries from git's perspective. However, Windows will silently merge them and see just one.

Here's our themes folder structure as seen on GitHub.com.

Case insensitive folder names

But when we clone it on Mac or Windows, we see just one folder.

DasBlog as a single folder in VS Code

Turns out that six months ago one of us introduced another folder with the name dasblog while the original was DasBlog. When we checked them on Mac or Windows the files ended up merged into one folder, but on Linux they were/are two, so the build fails.

You can fix this in a few ways. You can rename the file in a case-sensitive way and commit the change:

git mv --cached name.txt NAME.TXT

Please take care and back up anything you don't understand.

If you're renaming a directory, you'll do a two stage rename with a temp name.

git mv foo foo2
git mv foo2 FOO
git commit -m "changed case of dir"

Be safe out there!

Sponsor: Looking for a tool for performance profiling, unit test coverage, and continuous testing that works cross-platform on Windows, macOS, and Linux? Check out the latest JetBrains Rider!


© 2019 Scott Hanselman. All rights reserved.
     

Adding Reaction Gifs for your Build System and the Windows Terminal

Jun 25, 2019

Description:

So, first, I'm having entirely too much fun with the new open source Windows Terminal. If you've got the latest version of Windows (go run Windows Update and do whatever it takes) then you can download the Windows Terminal from the Microsoft Store! This is a preview release (think v0.2) but it'll automatically update, often, from the Windows Store if you have Windows 10 version 18362.0 or higher.

One of the most fun things is that you can have background images. Even animated gifs! You can add those images in your Settings/profile.json like this.

"backgroundImage": "c:/users/scott/desktop/doug.gif",
"backgroundImageOpacity": 0.7,
"backgroundImageStretchMode": "uniformToFill

The profile.json is just JSON and you can update it. I could even update it programmatically if I wanted to parse it and mess about.
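For example, here's a quick and dirty sketch of doing it programmatically with Newtonsoft.Json. The file location and JSON shape are assumptions based on the current preview (the settings file lived under the Terminal's LocalState folder for me), so treat it as an illustration and back the file up first:

using System;
using System.IO;
using System.Linq;
using Newtonsoft.Json.Linq;

class Program
{
    static void Main()
    {
        // Assumed path for the Store-installed preview - adjust for your machine
        var path = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            @"Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState\profiles.json");

        var settings = JObject.Parse(File.ReadAllText(path));
        var profile = settings["profiles"].First(); // just grab the first profile as a demo

        profile["backgroundImage"] = "c:/users/scott/desktop/doug.gif";
        profile["backgroundImageOpacity"] = 0.7;
        profile["backgroundImageStretchMode"] = "uniformToFill";

        File.WriteAllText(path, settings.ToString());
    }
}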

BUT. Enterprising developer Chris Duck created a lovely PowerShell Module called MSTerminalSettings that lets you very easily make Profile changes with script.

For example, Mac developers who use iTerm often go to https://iterm2colorschemes.com/ and get new color schemes for their consoles. Now Windows folks can as well!

From his docs, this example downloads the Pandora color scheme from https://iterm2colorschemes.com/ and sets it as the color scheme for the PowerShell Core terminal profile.

Invoke-RestMethod -Uri 'https://raw.githubusercontent.com/mbadolato/iTerm2-Color-Schemes/master/schemes/Pandora.itermcolors' -OutFile .\Pandora.itermcolors
Import-Iterm2ColorScheme -Path .\Pandora.itermcolors -Name Pandora
Get-MSTerminalProfile -Name "PowerShell Core" | Set-MSTerminalProfile -ColorScheme Pandora

That's easy! Then I was talking to Tyler Leonhardt and suggested that we programmatically change the background using a folder full of Animated Gifs. I happen to have such a folder (with 2000 categorized gif classics) so we started coding and streamed the whole debacle on Tyler's Twitch!

The result is Windows Terminal Attract Mode and it's a hot mess and it is up on GitHub and all set up for PowerShell Core.

Remember that "Attract mode" is the mode an idle arcade cabinet goes into in order to attract passersby to play, so clearly the Terminal needs this also.

./AttractMode.ps1 -name "profile name" -path "c:\temp\trouble" -secs 5

It's a proof of concept for now, and it's missing background/runspace support, being wrapped up in a proper module, etc but the idea is solid, building on a solid base, with improvements to idiomatic PowerShell Core already incoming. Right now it'll run forever. Wrap it in Start-Job if you like as well and can stand it.

I've made aliases so the new Windows Terminal shows REACTION GIFS for my build system and tests! pic.twitter.com/jpPSsrUoSO

— Scott Hanselman (@shanselman) June 28, 2019

The next idea was to have reaction gifs for different developer situations. Break the build? Reaction Gif. Passing tests? Reaction Gif.

Here's a silly proof (not refactored) that aliases "dotnet build" to "db" with reactions.

#messing around with build reaction gifs

Function DotNetAlias {
dotnet build
if ($?) {
Start-job -ScriptBlock {
d:\github\TerminalAttractMode\SetMoodGif.ps1 "PowerShell Core" "D:\Dropbox\Reference\Animated Gifs\chrispratt.gif"
Start-Sleep 1.5
d:\github\TerminalAttractMode\SetMoodGif.ps1 "PowerShell Core" "D:\Dropbox\Reference\Animated Gifs\4003cn5.gif"
} | Out-Null
}
else {
Start-job -ScriptBlock {
d:\github\TerminalAttractMode\SetMoodGif.ps1 "PowerShell Core" "D:\Dropbox\Reference\Animated Gifs\idk-girl.gif"
Start-Sleep 1.5
d:\github\TerminalAttractMode\SetMoodGif.ps1 "PowerShell Core" "D:\Dropbox\Reference\Animated Gifs\4003cn5.gif"
} | Out-Null

}
}

Set-Alias -Name db -value DotNetAlias

I added the Start-job stuff so that the build finishes and the Terminal returns control to you while the gifs still are updating. Runspace support would be smart as well.

Some other ideas? Giphy support. Random mood gifs. Pick me ups. You get the idea.

Later, Brandon Olin jumped in with this gem. Why not get a reaction gif if anything goes wrong in your last command? ERRORLEVEL 1? Explode.

Why are we doing this? Because it sparks joy, y'all.

Sponsor: Looking for a tool for performance profiling, unit test coverage, and continuous testing that works cross-platform on Windows, macOS, and Linux? Check out the latest JetBrains Rider!


© 2019 Scott Hanselman. All rights reserved.
     

You can now download the new Open Source Windows Terminal

Jun 20, 2019

Description:

Last month Microsoft announced a new open source Windows Terminal! It's up at https://github.com/microsoft/Terminal and it's great, but for the last several weeks you've had to build it yourself as a Developer. It's been very v0.1 if you know what I mean.

Today you can download the Windows Terminal from the Microsoft Store! This is a preview release (think v0.2) but it'll automatically update, often, from the Windows Store if you have Windows 10 version 18362.0 or higher. Run "winver" to make sure.

Windows Terminal

If you don't see any tabs, hit Ctrl-T and note the + and the pull down menu at the top there. Under the menu go to Settings to open profiles.json. Here's mine on one machine.

Here's some Hot Windows Terminal Tips

You can do background images, even animated, with opacity (with useAcrylic off):

"backgroundImage": "c:/users/scott/desktop/doug.gif",
"backgroundImageOpacity": 0.7,
"backgroundImageStretchMode": "uniformToFill

You can edit the key bindings to your taste in the "key bindings" section. For now, be specific, so the * might be expressed as Ctrl+Shift+8, for example.

Try other things like cursor shape and color, history size, as well as different fonts for each tab.

"cursorShape": "vintage"

If you're using WSL or WSL2, use the distro name like this in your new profile:

"wsl.exe -d Ubuntu-18.04"

If you like Font Ligatures or use Powerline, consider Fira Code as a potential new font.

I'd recommend you PIN terminal to your taskbar and start menu, but you can run Windows Terminal with the command "wt" from Win+R or from another console. That's just "wt" and enter!

Try not just "Ctrl+Mouse Scroll" but also "Ctrl+Shift+Mouse Scroll" and get your whole life!

Remember that the definition of a shell is somewhat fluid, so check out Azure Cloud Shell, in your terminal!

Windows Terminal menus

Also, let's start sharing nice color profiles! Share your new ones as a Gist in this format. Note the name.

{
"background" : "#2C001E",
"black" : "#4E9A06",
"blue" : "#3465A4",
"brightBlack" : "#555753",
"brightBlue" : "#729FCF",
"brightCyan" : "#34E2E2",
"brightGreen" : "#8AE234",
"brightPurple" : "#AD7FA8",
"brightRed" : "#EF2929",
"brightWhite" : "#EEEEEE",
"brightYellow" : "#FCE94F",
"cyan" : "#06989A",
"foreground" : "#EEEEEE",
"green" : "#300A24",
"name" : "UbuntuLegit",
"purple" : "#75507B",
"red" : "#CC0000",
"white" : "#D3D7CF",
"yellow" : "#C4A000"
}

Note also that this should be the beginning of a wonderful Windows Console ecosystem. This isn't the one terminal to end them all, it's the one to start them all. I've loved alternative consoles for YEARS, whether it be ConEmu or Console2 many years ago, I've long declared that Text Mode is a missed opportunity.

Remember also that Terminal !== Shell and that you can bring your shell of choice into your Terminal of choice! If you want the deep architectural dive, be sure to watch the BUILD 2019 technical talk with some of the developers or read about ConPTY and how to integrate with it!

Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.


© 2018 Scott Hanselman. All rights reserved.
     

Dynamically generating robots.txt for ASP.NET Core sites based on environment

Jun 18, 2019

Description:

I'm moving some of the older WebForms portions of my site that still run on bare metal over to ASP.NET Core and Azure App Services, and while I'm doing that I realized that I want to make sure my staging sites don't get indexed by Google/Bing.

I already have a robots.txt, but I want one that's specific to production and others that are specific to development or staging. I thought about a number of ways to solve this. I could have a static robots.txt and another robots-staging.txt and conditionally copy one over the other during my Azure DevOps CI/CD pipeline.

Then I realized the simplest possible thing would be to just make robots.txt be dynamic. I thought about writing custom middleware but that sounded like a hassle and more code than needed. I wanted to see just how simple this could be.

You could do this as a single inline middleware, and just lambda and func and linq the heck out of it all on one line (sketched below).
You could write your own middleware and do lots of options, then activate it based on env.IsStaging(), etc.
You could make a single Razor Page with environment taghelpers.
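For the curious, the inline middleware option might look roughly like this - a sketch, not what I ended up shipping, and it assumes env is the IHostingEnvironment passed into Configure:

app.Map("/robots.txt", builder => builder.Run(async context =>
{
    context.Response.ContentType = "text/plain";
    var body = env.IsProduction()
        ? "User-agent: *\nDisallow: /blog/private\n"
        : "User-agent: *\nDisallow: /\n";
    await context.Response.WriteAsync(body);
}));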

The last one seemed easiest and would also mean I could change the cshtml without a full recompile, so I made a RobotsTxt.cshtml single razor page. No page model, no code behind. Then I used the built-in environment tag helper to conditionally generate parts of the file. Note also that I forced the mime type to text/plain and I don't use a Layout page, as this needs to stand alone.

@page
@{
Layout = null;
this.Response.ContentType = "text/plain";
}
# /robots.txt file for http://www.hanselman.com/
User-agent: *
<environment include="Development,Staging">Disallow: /</environment>
<environment include="Production">Disallow: /blog/private
Disallow: /blog/secret
Disallow: /blog/somethingelse</environment>

I then make sure that my Staging and/or Production systems have ASPNETCORE_ENVIRONMENT variables set appropriately.

ASPNETCORE_ENVIRONMENT=Staging

I also want to point out what may look like odd spacing and how some text is butted up against the TagHelpers. Remember that a TagHelper's tag sometimes "disappears" (is elided) when it's done its thing, but the whitespace around it remains. So I want User-agent: * to have a line, and then Disallow to show up immediately on the next line. While it might be prettier source code to have that start on another line, it's not a correct file then. I want the result to be tight and above all, correct. This is for staging:

User-agent: *
Disallow: /

This now gives me a robots.txt at /robotstxt but not at /robots.txt. See the issue? Robots.txt is a file (or a fake one) so I need to map a route from the request for /robots.txt to the Razor page called RobotsTxt.cshtml.

Here I add a RazorPagesOptions in my Startup.cs with a custom PageRoute that maps /robots.txt to /robotstxt. (I've always found this API annoying as the parameters should, IMHO, be reversed like ("from","to") so watch out for that, lest you waste ten minutes like I just did.)

public void ConfigureServices(IServiceCollection services)
{
services.AddMvc()
.AddRazorPagesOptions(options =>
{
options.Conventions.AddPageRoute("/robotstxt", "/Robots.Txt");
});
}

And that's it! Simple and clean.

You could also add caching if you wanted, either as a larger middleware, or even in the cshtml Page, like

context.Response.Headers.Add("Cache-Control", $"max-age=SOMELARGENUMBEROFSECONDS");

but I'll leave that small optimization as an exercise to the reader.

UPDATE: After I was done I found this robots.txt middleware and NuGet up on GitHub. I'm still happy with my code and I don't mind not having an external dependency, but it's nice to file this one away for future more sophisticated needs and projects.

How do you handle your robots.txt needs? Do you even have one?

Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.


© 2018 Scott Hanselman. All rights reserved.
     

Making a tiny .NET Core 3.0 entirely self-contained single executable

Jun 13, 2019

Description:

I've always been fascinated by making apps as small as possible, especially in the .NET space. No need to ship any files - or methods - that you don't need, right? I've blogged about optimizations you can make in your Dockerfiles to make your .NET containerized apps small, as well as using the ILLink.Tasks linker from Mono to "tree trim" your apps to be as small as they can be.

Work is ongoing, but with .NET Core 3.0 preview 6, ILLink.Tasks is no longer supported and instead the Tree Trimming feature is built into .NET Core directly.

Here is a .NET Core 3.0 Hello World app.

225 files, 69 megs

Now I'll open the csproj and add PublishTrimmed = true.

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.0</TargetFramework>
<PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
</Project>

And I will compile and publish it for Win-x64, my chosen target.

dotnet publish -r win-x64 -c release

Now it's just 64 files and 28 megs!

64 files, 28 megs

If your app uses reflection you can let the Tree Trimmer know by telling the project system about your Assembly, or even specific Types or Methods you don't want trimmed away.

<ItemGroup>
<TrimmerRootAssembly Include="System.IO.FileSystem" />
</ItemGroup>

The intent in the future is to have .NET be able to create a single small executable that includes everything you need. In my case I'd get "supersmallapp.exe" with no dependencies. That's done using PublishSingleFile along with the RuntimeIdentifier in the csproj like this:

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.0</TargetFramework>
<PublishTrimmed>true</PublishTrimmed>
<PublishReadyToRun>true</PublishReadyToRun>
<PublishSingleFile>true</PublishSingleFile>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
</Project>

At this point you've got everything expressed in the project file and a simple "dotnet publish -c Release" makes you a single exe!

There's also a cool global utility called Warp that makes things even smaller. This utility, combined with the .NET Core 3.0 SDK's now-built-in Tree Trimmer creates a 13 meg single executable that includes everything it needs to run.

C:\Users\scott\Desktop\SuperSmallApp>dotnet warp
Running Publish...
Running Pack...
Saved binary to "SuperSmallApp.exe"

And the result is just a 13 meg single EXE ready to go on Windows.

A tiny 13 meg .NET Core 3 application

If you want, you can combine this "PublishTrimmed" property with "PublishReadyToRun" as well and get a small AND fast app.

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.0</TargetFramework>
<PublishTrimmed>true</PublishTrimmed>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
</Project>

These are not just IL (Intermediate Language) assemblies that are JITted (Just in time compiled) on the target machine. These are more "pre-chewed" AOT (Ahead of Time) compiled assemblies with as much native code as possible to speed up your app's startup time. From the blog post:

In terms of compatibility, ReadyToRun images are similar to IL assemblies, with some key differences. IL assemblies contain just IL code. They can run on any runtime that supports the given target framework for that assembly. For example a netstandard2.0 assembly can run on .NET Framework 4.6+ and .NET Core 2.0+, on any supported operating system (Windows, macOS, Linux) and architecture (Intel, ARM, 32-bit, 64-bit). R2R assemblies contain IL and native code. They are compiled for a specific minimum .NET Core runtime version and runtime environment (RID). For example, a netstandard2.0 assembly might be R2R compiled for .NET Core 3.0 and Linux x64. It will only be usable in that or a compatible configuration (like .NET Core 3.1 or .NET Core 5.0, on Linux x64), because it contains native code that is only usable in that runtime environment.

I'll keep exploring .NET Core 3.0, and you can install the SDK here in minutes. It won't mess up any of your existing stuff.

Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today!


© 2018 Scott Hanselman. All rights reserved.
     

Visual Studio Code Remote Development over SSH to a Raspberry Pi is butter

Jun 11, 2019

Description:

There's been a lot of folks, myself included, who have tried to install VS Code on the Raspberry Pi. In fact, there's a lovely process for this now. However, we have to ask ourselves: is a Raspberry Pi really powerful enough to be running a full development environment and the app being debugged? Perhaps, but maybe this is a job for remote debugging. That means installing Visual Studio Code locally on my Windows or Mac machine, then having Visual Studio Code install its headless server component (for ARM7) on the Pi.

In January I blogged about Remote Debugging with VS Code on a Raspberry Pi using .NET Core on ARM. It was, and is, a little hacked together with SSH and wishes. Let's set up a proper VS Code Remote environment so I can be productive on a Pi while still enjoying my main laptop's abilities.

First, can you ssh into your Raspberry Pi without a password prompt? If not, be sure to set that up with OpenSSH, which is now installed on Windows 10 by default. You know you've got it down when you can "ssh pi@mypi" and it just drops you into a remote prompt.
Next, get Visual Studio Code Insiders plus the Remote Development Extension.
Uninstall the "Remote - SSH" Extensions; disabling them isn't enough because you want to replace them with...
Important - the Remote - SSH Nightly Builds.

From within VS Code Insiders, hit Ctrl/CMD+P and type "Remote-SSH" for some of the choices.

Remote-SSH options in VS Code

I can connect to Host and VS Code will SSH into the Pi and install the VS Code server components in ~/.vscode-server-insiders and then connect to them. It will take a minute as it's downloading a 25 meg GZip and unzipping it into this temp folder. You'll know you're connected when you see this green badge as seen below that says "SSH: hostname."

Green badge in VS Code - SSH: crowpi

Then when you go "File | Open Folder" from the main menu, you'll get the remote system's files! You are working and editing locally on remote files.

My Raspberry Pi's desktop, remotely

Note here that some of the extensions are NOT installed locally! The Python language services (using Jedi) are running remotely on the Raspberry Pi, so when I get intellisense, I'm getting it remoted from the actual machine I'm developing on, not a guess from my local box.

Some extensions are local and others are remote

When I open a Terminal with Ctrl+~, I automatically get a remote terminal, and I'm even running htop in it!

Check this out, I'm doing a remote interactive debugging session against CrowPi samples running on the Raspberry Pi (in Python 2) remotely from VS Code on my Windows 10 machine! I did need to make one change to the remote settings as it was defaulting to Python3 and I wanted to use Python2 for these samples.

Remote Debugging a Raspberry Pi

This has been a very smooth process and I remain super impressed with the VS Remote Development experience. I'll be looking at containers, and remote WSL debugging soon as well. Next step is to try C#, remotely, which will mean making sure the C# OmniSharp Extension works on ARM and remotely.

Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today!


© 2018 Scott Hanselman. All rights reserved.
     

This changes everything for the DIY Diabetes Community - TidePool partners with Medtronic and Dexcom

Jun 6, 2019

Description:

D8f8EEYVsAAqN-DI don’t speak in hyperbole very often, and I want to make sure that you all understand what a big deal this is for the diabetes DIY community. Everything that we’ve worked for for the last 20 years, it all changes now. #WeAreNotWaiting

"You probably didn’t see this coming, [Tidepool] announced an agreement to partner with our friends at Medtronic Diabetes to support a future Bluetooth-enabled MiniMed pump with Tidepool Loop. Read more here: https://www.tidepool.org/blog/tidepool-loop-medtronic-collaboration"

Translation? This means that diabetics will be able to choose their own supported equipment and build their own supported FDA Approved Closed Loop Artificial Pancreases.

Open Source Artificial Pancreases will become the new standard of care for Diabetes in 2019

Every diabetic engineer ever, the day after they were diagnosed, tries to solve their (or their loved one's) diabetes with open software and open hardware. Every one. I did it in the early 90s. Someone diagnosed today will do this tomorrow. Every time.

I tried to send my blood sugar to the cloud from a PalmPilot. Every person diagnosed with diabetes ever, does this. Has done this. We try to make our own systems. Then @NightscoutProj happened and #WeAreNotWaiting happened and we shared code and now we sit on the shoulders of people who GAVE THEIR IDEAS TO US FOR FREE.


Here's the first insulin pump. Imagine a disease this miserable that you'd choose this. Type 1 Diabetes IS NOT FUN. Now we have Bluetooth and Wifi and the Cloud but I still have an insulin pump I bought off of Craigslist.


Imagine a watch that gives you an electrical shock so you can check your blood sugar. We are all just giant bags of meat and water under pressure and poking the meatbag 10 times a day with needles and #diabetes testing strips SUUUUCKS.


The work of early #diabetes pioneers is being now leveraged by @Tidepool_org to encourage large diabetes hardware and sensor manufacturers to - wait for it - INTEROPERATE on standards we can talk to.


Just hours after I got off stage speaking on this very topic at @RefactrTech, it turns out that @howardlook and the wonderful friends at @Tidepool_org like @kdisimone and @ps2 and pioneer @bewestisdoing and others announced there are now partnerships with MULTIPLE insulin pump manufacturers AND multiple sensors!

We the DIY #diabetes community declared #WeAreNotWaiting and, dammit, we'd do this ourselves. And now TidePool is expressing the intent to put an Artificial Pancreas in the damn App Store - along with Angry Birds - WITH SUPPORT FOR WARRANTIED NEW BLE PUMPS. I could cry.

You see this #diabetes insulin pump? It’s mine. See those cracks? THOSE ARE CRACKS IN MY INSULIN PUMP. This pump does not have a warranty, but it’s the only one that I have if I want an open source artificial pancreas. Now I’m going to have real choices, multiple manufacturers.

It absolutely cannot be overstated how many people keep this community alive, from early python libraries that talked to insulin pumps, to man in the middle attacks to gain access to our own data, to custom hardware boards created to bridge the new and the old.

To the known in the unknown, the song in the unsung, we in the Diabetes Community appreciate you all. We are standing on the shoulders of giants - I want to continue to encourage open software and open hardware whenever possible. Get involved. 

Also, if you're diabetic, consider buying a Nightscout Xbox Avatar accessory so you can see yourself represented while you game!

Oh, and one other thing, journalists who cover the Diabetes DIY community, please let us read your articles before you write them. They all have mistakes and over-generalizations and inaccuracies and it's awkward to read them. That is all.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Clever little C# and ASP.NET Core features that make me happy

Jun 4, 2019

Description:

Visual Studio

I recently needed to refactor my podcast site which is written in ASP.NET Core 2.2 and running in Azure. The Simplecast-backed API changed in a few major ways from their v1 to a new redesigned v2, so there was a big backend change and that was a chance to tighten up the whole site.

As I was refactoring I made a few small notes of things that I liked about the site. A few were C# features that I'd forgotten about! C# is on version 8 but there were little happinesses in 6.0 and 7.0 that I hadn't incorporated into my own idiomatic view of the language.

This post is collecting a few things for myself, and you, if you like.

I've got a mapping between two collections of objects. There's a list of all Sponsors, ever. Then there's a mapping of shows where a show might have n sponsors.

Out Var

I have to "TryGetValue" because I can't be sure if there's a value for a show's ID. I wish there was a more compact way to do this (a language shortcut for TryGetValue, but that's another post).

Shows2Sponsor map = null;
shows2Sponsors.TryGetValue(showId, out map);
if (map != null)
{
var retVal = sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();
return retVal;
}
return null;
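As an aside, if shows2Sponsors is a Dictionary and you're on .NET Core 2.0 or later (assumptions on my part), GetValueOrDefault gets pretty close to that shortcut:

// GetValueOrDefault comes from System.Collections.Generic.CollectionExtensions
var map = shows2Sponsors.GetValueOrDefault(showId);
return map == null ? null : sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();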

I forgot that in C# 7.0 they added "out var" parameters, so I don't need to declare the map or its type. Tighten it up a little and I've got this. The LINQ query there returns a List of sponsor details from the main list, using the IDs returned from the TryGetValue.

if (shows2Sponsors.TryGetValue(showId, out var map)) return sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();
return null;

Type aliases

I found myself building JSON types in C# that were using the "Newtonsoft.Json.JsonPropertyAttribute" but the name is too long. So I can do this:

using J = Newtonsoft.Json.JsonPropertyAttribute;

Which means I can do this:

[J("description")]
public string Description { get; set; }

[J("long_description")] public string LongDescription { get; set; } LazyCache

I blogged about LazyCache before, and its challenges but I'm loving it. Here I have a GetShows() method that returns a List of Shows. It checks a cache first, and if it's empty, then it will call the Func that returns a List of Shows, and that Func is the thing that does the work of populating the cache. The cache lasts for about 8 hours. Works great.

public async Task<List<Show>> GetShows()
{
Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
return await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
}
private async Task<List<Show>> PopulateShowsCache()
{
List<Show> shows = await _simpleCastClient.GetShows();
_logger.LogInformation($"Loaded {shows.Count} shows");
return shows.Where(c => c.Published == true && c.PublishedAt < DateTime.UtcNow).ToList();
}

What are some little things you're enjoying?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

What's better than ILDasm? ILSpy and dnSpy are tools to Decompile .NET Code

May 30, 2019

Description:

.NET code (C#, VB, F#, etc) compiles (for the most part) into Intermediate Language (IL) and then makes its way to native code usually by Just-in-time (JIT) compilation on the target machine. When you get a DLL/Assembly, it's pre-chewed but not fully juiced, to mix my metaphors.

Often you'll come across a DLL that you want to learn more about. Sometimes you'll want to just see the structure of classes, methods, etc, and other times you want to see the IL - or a close representation of the original C#/VB/F#, etc. You're not looking at the source, you're seeing a backwards projection of the IL as whatever language you want. You're basically taking this pre-chewed food and taking it out of your mouth and getting a decent idea of what it was originally.

I've used ILDasm for years, but it's old and lame and people tease you for using it because they are cruel. ;)

Seriously, though, I use ILDasm - the IL Disassembler - simply because it's already installed. Those tweets got me thinking though that I need to update my options, so I'm trying out ILSpy and dnSpy.

ILSpy

ILSpy has been around for a while and has multiple front-ends, including ones for Linux/Mac/Windows based on Avalonia in the form of AvaloniaSpy. You can also integrate ILSpy into Visual Studio 2017 or 2019 with this extension. There is also a console decompiler and, interestingly, cross-platform PowerShell cmdlets.

ILSpy is a solid .NET decompiler

I've always liked the "Open List" feature of ILSpy where you can open a preconfigured list of assemblies you want to browse, like ASP.NET MVC, .NET 4, etc. A fun open source contribution for you might be to update the included lists with newer defaults. There's so many folks doing great work in open source out there, why not jump in and help them out?

dnSpy

dnSpy has a lovely UI AND a great Console app using the same engine. It's amazingly polished and VERY complete. I was surprised that it also has a full hex editor as well as property pages for common EXE file headers. From their GitHub, dnSpy features

Debug .NET Framework, .NET Core and Unity game assemblies, no source code required
Edit assemblies in C# or Visual Basic or IL, and edit all metadata
Light and dark themes
Extensible, write your own extension
High DPI support (per-monitor DPI aware)

dnSpy takes it to the next level with an integrated Debugger, meaning you can attach to a running process and debug it without source code - but it feels like source code because it's decompiling for you. Note where it says C#, I can choose C#, VB, or IL as a "view" on my decompiled code.

dnSpy is amazing for looking inside .NET apps

Here is dnSpy actually debugging ILSpy and stopped at a decompiled breakpoint.

image

There's a lot of great low-level stuff in this space. Another cool tool is Reflexil, a .NET Assembly Editor as well as de4dot by the same mysterious author as dnSpy. JetBrains has the excellent dotPeek and Telerik has JustDecompile. Commercial Tools include Reflector.

What's your favorite?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Bringing the SpaceOrb game controller forward with an Arduino Bridge via The Orbotron 9001

May 28, 2019

Description:

Thingotron brings your SpaceOrb back to life

Almost ten years ago I posted about the SpaceTec SpaceOrb 360 Controller and that was 15 years after it came out. We are now 25 years into the legend of the SpaceOrb and I will continue to tell the tale. The SpaceOrb is one of a series of innovative "Spacemice" that offer more than just two degrees of input freedom. In fact, they offer SIX.

"The puck or ball of a spacemouse can be moved along X, Y and Z axis as well as being twisted rotationally on each of those axis. (Roll, Pitch and Yaw)"

Vic Putz continues to carry a torch for the SpaceOrb, as do I, except he's actually doing something about it. A decade ago I bought an Arduino and an "OrbShield" from Vic that sat on top and provided a realtime bridge between the RS-232 Serial Port and the modern USB "HID" (Human Interface Device) protocol that is used today. The goal is to move beyond unsigned device drivers and create a system-agnostic solution that would present an old device in a new driver-free way.

Vic has been working on a new version called the Orbotron 9000/9001 for the last few years and it's currently sold out at his little store. It acts as an interface for the SpaceOrb 360 and comes configured for that device, but should also work with the SpaceBall 5000, SpaceBall 4000FLX, and Magellan SpaceMouse. Code and plans are on GitHub, natch.

SpaceOrb

When you plug the SpaceOrb into the Orbotron 9001 then into your PC it shows up as a Game Controller!

The SpaceOrb as it presents inside Windows as a controller

There are several innovative "six degrees of freedom" games out there, like "Overload," the sequel to Descent, on Steam, as well as Retrovirus, and NeonXSZ, as well as open source reimplementations of Descent like DXX Rebirth (give them some love!) and Forsaken.

Modern Xinput games are trickier, but you may have success with https://www.x360ce.com by mapping the orb buttons and axes to a gamepad.

Descent - DXX Re-birth

I'm still exploring this space, but I love that The Internet - with the help of the enterprising and patient - refused to let the good parts of history die, by making innovative and clean bridges between the past and the future.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Piper Command Center BETA - Build a game controller from scratch with Arduino

May 23, 2019

Description:

Piper Command Center

Back in 2018 I posted my annual Christmas List of STEM Toys and the Piper Computer Kit 2 was on the list. My kids love this little wooden "laptop" comprised of a Raspberry Pi and an LCD screen. You spend time going through curated episodes of custom content and build and wire the computer LIVE while it's on!

The Piper folks saw my post and asked me to take a look at the BETA of their Piper Command Center, so my sons and I jumped at the chance. They are actively looking for feedback. It's a chance to build our own game controller!

The Piper Command Center BETA already has a ton of online content and things to try. Their "firmware" is an Arduino sketch and it's all up on GitHub. You'll want to get the Arduino IDE from the Windows Store.

Today the Command Center can look like a Keyboard or a Mouse.

In Mouse Mode (default), the joystick controls cursor movement and the left and right buttons mimic left and right mouse clicks. In Keyboard Mode, the joystick mimics the arrow keys on a keyboard, and the buttons mimic Space Bar (Up), Z (Left), X (Down), and C (Right) keys on a keyboard.
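The real firmware is that Arduino sketch up on GitHub, but just to make the Keyboard Mode mapping concrete, here's a little C# sketch of the same idea - the type and member names are mine, purely for illustration.

// Hypothetical illustration of the Command Center's Keyboard Mode mapping:
// joystick directions become arrow keys, buttons become Space/Z/X/C.
using System;

enum Joystick { Up, Down, Left, Right }
enum PiperButton { Up, Left, Down, Right }

static class CommandCenterMapping
{
    public static ConsoleKey KeyFor(Joystick direction) => direction switch
    {
        Joystick.Up    => ConsoleKey.UpArrow,
        Joystick.Down  => ConsoleKey.DownArrow,
        Joystick.Left  => ConsoleKey.LeftArrow,
        Joystick.Right => ConsoleKey.RightArrow,
        _ => throw new ArgumentOutOfRangeException(nameof(direction))
    };

    public static ConsoleKey KeyFor(PiperButton button) => button switch
    {
        PiperButton.Up    => ConsoleKey.Spacebar,
        PiperButton.Left  => ConsoleKey.Z,
        PiperButton.Down  => ConsoleKey.X,
        PiperButton.Right => ConsoleKey.C,
        _ => throw new ArgumentOutOfRangeException(nameof(button))
    };
}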

Once it's built you can use the controller to play games in your browser, or soon, with new content on the Piper itself, which runs Minecraft usually. However, you DO NOT need the Piper to get the Piper Command Center. They are separate but complementary devices.

Assemble a real working game controller, understand the basics of an Arduino, and discover physical computing by configuring a joystick, buttons, and more. Ideal for ages 13+.

My son is looking at how he can modify the "firmware" on the Command Center to allow him to play emulators in the browser.

The parts of the Piper Command Center

Parts and Wires for the Piper Command Center

The Piper Command Center comes unassembled, of course, and you get to put it together with a cool blueprint instruction sheet. We had some fun with the wiring and were off by one a few times, but they've got a troubleshooting video that helped us through it.

Blueprints for the Piper Command Center

It's a nice little bit of kit and I love that it's made of wood. I'd like to see one with a second joystick that could literally emulate an XInput control pad, although that might be more complex than just emulating a mouse or keyboard.

Go check it out. We're happy with it and we're looking forward to whatever direction it goes. The original Piper has updated itself many times in the few years we've had it, and we upgraded it to a 16 gig SD Card to support the latest content and OS update.

Piper Command Center is in BETA and will be updated and actively developed as they explore this space and what they can do with the device. As of the time of this writing there were five sketches for this controller.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Visual Studio Code Remote Development may change everything

May 21, 2019

Description:

DevContainer using Rust

OK, that's a little clickbaity but it's surely impressed the heck out of me. You can read more about VS Code Remote Development (at the time of this writing, available in the VS Code Insiders builds) but here's a little on my first experience with it.

The Remote Development extensions require Visual Studio Code Insiders.

Visual Studio Code Remote Development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment. It effectively splits VS Code in half and runs the client part on your machine and the "VS Code Server" basically anywhere else. The Remote Development extension pack includes three extensions. See the following articles to get started with each of them:

Remote - SSH - Connect to any location by opening folders on a remote machine/VM using SSH.
Remote - Containers - Work with a sandboxed toolchain or container-based application inside (or mounted into) a container.
Remote - WSL - Get a Linux-powered development experience in the Windows Subsystem for Linux.

Lemme give a concrete example. Let's say I want to do some work in any of these languages, except I don't have ANY of these languages/SDKs/tools on my machine.

Aside: You might, at this point, have already decided that I'm overreacting and this post is nonsense. Here's the thing though when it comes to remote development. Hang in there.

On the Windows side, lots of folks create Windows VMs in someone's cloud and then RDP (Remote Desktop) into that machine and push pixels around, letting the VM do all the work while they remote the screen. On the Linux side, lots of folks create Linux VMs or containers and then SSH into them with their favorite terminal, run vim and tmux or whatever, and push text around, letting the VM do all the work while they remote the text. In both these scenarios you're not really client/server, you're terminal/server or thin client/server. VS Code is a thick client with clean, clear interfaces to language services that have location transparency.

I type some code, maybe an object instance, then intellisense is invoked with a press of "." - who does that work? Where does that list come from? If you're running code locally AND in the container, then you need to make sure both sides are in sync, same SDKs, etc. It's challenging.

OK, I don't have the Rust language or toolkit on my machine.

I'll clone this repository:

git clone https://github.com/Microsoft/vscode-remote-try-rust

Then I'll run Code, the Insiders version:

C:\github> git clone https://github.com/Microsoft/vscode-remote-try-rust
Cloning into 'vscode-remote-try-rust'...
Unpacking objects: 100% (38/38), done.
C:\github> cd .\vscode-remote-try-rust\
C:\github\vscode-remote-try-rust [main =]> code-insiders .

Then VS Code says, hey, this is a Dev Container, want me to open it?

There's a devcontainer.json file that has a list of extensions that the project needs. And it will install those VS Code extensions inside a Development Docker Container and then access them remotely. This isn't a list of extensions that your LOCAL system needs - you don't want to sully your system with 100 extensions. You want to have just those extensions that you need for the project you're working on. Compartmentalization. You could do development and never install anything on your local machine, but you're finding a sweet spot that doesn't involve pushing text or pixels around.

Reopen in Container

Now look at this screenshot and absorb. It's setting up a dockerfile, sure, with the development tools you want to use and then it runs docker exec and brings in the VS Code Server!

Setting up Rust

Check out the Extensions section of VS Code, and check out the lower left corner. That green status bar shows that we're in a client/server situation. The extensions specific to Rust are installed in the Dev Container and we are using them from VS Code.

Extensions

When I'm typing and working on my code in this way (by the way it took just minutes to get started) I've got a full experience with Intellisense, Debugging, etc.

Intellisense from a container running Rust and VS Code Remote Containers

Here I am doing a live debug session of a Rust app with zero setup other than VS Code Insiders, the Remote Extensions, and Docker (which I already had).

Debugging in VS Code a Rust app within a DevContainer

As I mentioned, you can run within WSL, Containers, or over SSH. It's early days but it's extraordinarily clean. I'm really looking forward to seeing how far and effortless this style of development can go. There's so much less yak shaving! It effectively removes the whole setup part of your coding experience and you get right to it.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Using the Steam Link app to stream PC Games directly to your iPhone or mobile device

May 16, 2019

Description:

Steam Link on iOS

I think that we, as an industry, are still figuring game streaming out. It's challenging to find that sweet spot between quality and frames per second, all while respecting the speed of light and the laws of physics.

That said, if you have a rock solid 5GHz wireless network, or better yet, a solid wired network, you can do some pretty cool stuff today.

How to stream PC games from Windows 10 to your Xbox One for free

You can use the Xbox app on Windows 10 to stream from your Xbox One to your PC. I use this to play on my Xbox while I walk on my treadmill in my garage. Works great even on my comparatively underpowered Surface Pro 3.

You can also do the opposite if you have a powerful PC. You can run the Xbox Wireless Display app and remote your PC to your Xbox.

Here I am running Batman on my PC with an NVidia 1080, from my Xbox

I also have a Steam Link - it's odd to me that they discontinued this great little device - that I use to stream from my PC to my big TV. However, if you have a Raspberry Pi 3 or 3B+ running Stretch, you can try a beta of Steam Link and effectively make your own little Steam Link dedicated device. Bonus points if you 3D Print a replica case to make it look like a Steam Link.

sudo apt update
sudo apt install steamlink
steamlink

Today, however, Steam Link was released (after a rejection) to the Apple iOS App Store so I had to try this out from my iPhone XS Max. I also have a Steam Controller, which, while weird (i.e. it's not an Xbox Controller), is the most configurable controller ever and it can emulate a mouse pretty well when needed. They released a new firmware for the Steam Controller that enabled BLE support, which allows it to be used as an MFi controller on an iOS device. You do need to memorize or write down the incantations to switch between original RF mode and BLE mode, though.

Aside: MFi is almost criminally neglected and Apple has utterly dropped the ball and missed an opportunity to REALLY make iOS devices more than casual gaming devices. Only in the last few years have decent MFi Controllers been released and game support is still embarrassingly spotty. I've used my now-discontinued SteelSeries Stratus a handful of times.

You install the app, pair your controller with your iOS device/phone/tablet, then test your network. I'm using an Amplifi Mesh Network so I can control how my devices connect to the network, I can manage band selection, as well as Quality of Service (QoS) so I didn't have any trouble getting 55 Mb/s from my wired computer to my wireless iPhone.

Steaming bandwidth test successful up to 55 Mb/s

The quality is up and down as it appears they are focused on maintaining a high framerate. Here's a captured local video of me playing Batman from my high end rig streaming to Steam Link on my iPhone.

Here’s a better quality video with the iPhone at full power and connect to 5ghz using Steam Link pic.twitter.com/N2UZ0P2G4n

— Scott Hanselman (@shanselman) May 18, 2019

What has been YOUR experience with Game Streaming?

Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today!
© 2018 Scott Hanselman. All rights reserved.
     

Introducing the Try .NET Global Tool - interactive in-browser documentation and workshop creator

May 15, 2019

Description:

Learn .NET or easily author your own workshop

If you find yourself learning C# and .NET and come upon the "Run your first C# Program" documentation you may have noticed a "Try the code in your browser" button that lets you work through your first app entirely online, with no local installation! You're running C# and .NET in the browser! It's a great way to learn that is familiar to folks who learn JavaScript.

The language team at Microsoft wants to bring that easy on-ramp to everyone who wants to learn .NET.

The .NET Foundation has published a lot of free .NET presentations and workshops that you can use today to teach open source .NET to your friends, colleagues, or students. However these do encourage you to install a number of prerequisites and we believe that there might be an easier on-ramp to learning .NET.

Today we're announcing that on ramp - the Try .NET global tool!

Here's the experience. First, get the .NET SDK - pick the one that says you want to "Build Apps." Then just get the "try" tool and try it!

Open a terminal/command prompt and type dotnet tool install --global dotnet-try

Now you can either navigate to an empty folder and type

dotnet try demo

or, even better, do this!

ACTION: Clone the samples repo with
git clone https://github.com/dotnet/try -b samples
then run
"dotnet try"
and that's it!

NOTE: Make sure you get the samples branch until we have more samples!

C:\Users\scott\Desktop> git clone https://github.com/dotnet/try -b samples
Cloning into 'try'...
C:\Users\scott\Desktop> cd .\try\Samples\
C:\Users\scott\Desktop\try\Samples [samples ≡]> dotnet try
Hosting environment: Production
Content root path: C:\Users\scott\Desktop\try\Samples
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001

Your browser will pop up and you're inside a local interactive workshop! Notice the URL? You're browsing your *.md files and the code inside is runnable. It's all local to you! You can put this on a USB key and learn offline or in disconnected scenarios, which is great for folks in developing countries. Take workshops home and remix! Run an entire workshop in the browser and the setup instructions for the room are basically "get this repository" and type "dotnet try"!

Try .NET interactive local documentation

This is not just a gentle on-ramp that teaches .NET without yet installing Visual Studio, but it also is a toolkit for you to light up your own Markdown.

Just add a code fence - you may already be doing this! Note the named --region there? It's not actually running the visible code in the Markdown - that alone isn't enough! It's compiling your app and capturing the result of the named region in your source! You could even make an entire .NET interactive online book.

### Methods
A **method** is a block of code that implements some action. `ToUpper()` is a method you can invoke on a string, like the *name* variable. It will return the same string, converted to uppercase.
``` cs --region methods --source-file .\myapp\Program.cs --project .\myapp\myapp.csproj
var name = "Friends";
Console.WriteLine($"Hello {name.ToUpper()}!");
```

And my app's code might look like:

using System;

namespace HelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            #region methods
            var name = "Friends";
            Console.WriteLine($"Hello {name.ToUpper()}!");
            #endregion
        }
    }
}

Make sense?

NOTE: Closing code fences ``` must be on a newline.

Hey you! YOU have some markdown or just a readme.md in your project! Can you light it up and make a workshop for folks to TRY your project?

Code Fences within Markdown

Here I've typed "dotnet try verify" to validate my markdown and ensure my samples compile. Dotnet Try is both a runner and a creator's toolkit.

Compiling your workshop

Today "dotnet try" uses .NET Core 2.1 but if you have .NET Core 3 installed you can explore the more complex C# samples here with even more interesting and sophisticated presentations. You'll note in the markdown the --session argument for the code fence allows for interesting scenarios where more than one editor runs in the context of one operation!

image

I'd love to see YOU create workshops with Try .NET. It's early days and this is an Alpha release but we think it's got a lot of promise. Try installing it and running it now and later head over to https://github.com/dotnet/try to file issues if you find something or have an idea.

Go install "dotnet try" locally now, and remember this is actively being developed so you can update it easily and often like this!

dotnet tool update -g dotnet-try

There's lots of ideas planned, as well as the ability to publish your local workshop as an online one with Blazor and WASM. Here's a live example.

Watch for a much more in-depth post from Maria from my team on Thursday on the .NET blog!

Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today!


© 2018 Scott Hanselman. All rights reserved.
     

Systems Thinking as important as ever for new coders

May 9, 2019

Description:

Two programmers having a chatI was at the Microsoft BUILD conference last week and spent some time with a young university student who came prepared. I was walking between talks and he had a sheet of paper organized with questions. We sat down and went through the sheet.

One of his main questions that followed a larger theme was, since his class in South Africa was learning .NET Framework on Windows, should he be worried? Shouldn't they be learning the latest .NET Core and the latest C#? Would they be able to get jobs later if they aren't on the cutting edge? He was a little concerned.

I thought for a minute. This isn't a question one should just start talking about and see where their mouth takes them. I needed to absorb and breathe before answering. I'm still learning myself and I often need a refresher to confirm my understanding of systems.

It doesn't matter if you're a 21 year old university student learning C# from a book dated 2012, or a 45 year old senior engineer doing WinForms at a small company in the midwest. You want to make sure you are valuable, that your skills are appreciated, and that you'll be able to provide value at any company.

I told this young person to try not to focus on the syntax of C# and the details of the .NET Framework, and rather to think about the problems that it solves and the system around it.

This advice was .NET specific, but it can also apply to someone learning Rails 3 talking to someone who knows Rails 5, or someone who learned original Node and is now reentering the industry with modern JavaScript and Node 12.

Do you understand how your system talks to the file system? To the network? Do you understand latency and how it can affect your system? Do you have a general understanding of "the stack," from when your backend gets data from the database, makes angle brackets or curly braces, and sends them over the network to a client/browser, to what that next system does with the info?

Squeezing an analogy, I'm not asking you to be able to build a car from scratch, or even rebuild an engine. But I am asking you for a passing familiarity with internal combustion engines, how to change a tire, or generally how to change your oil. Or at least know that these things exist so you can google them.

If you type Google.com into a browser, generally what happens? If your toaster breaks, do you buy a new toaster or do you check the power at the outlet, then the fuse, then call the neighbor to see if the power is out for your neighborhood? Think about systems and how they interoperate. Systems Thinking is more important than coding.

If your programming language or system is a magical black box to you, then I ask that you demystify it. Dig inside to understand it. Crack it open. Look in folders and directories you haven't before. Break things. Fix them.

Know what artifacts your system makes and what's needed for it to run. Know what kinds of things it's good at and what it's bad at - in a non-zealous and non-egotistical way.

You don't need to know it all. In fact, you may dig in, look around inside the hood of a car and decide to take ride-sharing or public transport for the rest of your life, but you will at least know what's under the hood!

For the young person I spoke to, yes .NET Core may be a little different from .NET Framework, and they might both be different from Ruby or JavaScript, but strings are strings, loops are loops, memory is memory, disk I/O is what it is, and we all share the same networks. Processes and threads, ports, TCP/IP, and DNS - understanding the basic building blocks are important.

Drive a Honda or a Jeep, you'll still need to replace your tires and think about the road you're driving on, on the way to the grocery store.

What advice would you give to a young person who is not sure if what they are learning in school will serve them well in the next 10 years? Let us know in the comments.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

An experiment - The Azure Cloud Shell at the command line with AZ SHELL

May 7, 2019

Description:

I've blogged before about the Azure Cloud Shell. It's super cool and you can get your own easily in any browser by hitting https://shell.azure.com. You can have either bash or powershell, and you get a shared "cloud drive" that is persisted between sessions.

If you have Visual Studio Code you can get an Azure Cloud Shell integrated within VSCode by just installing Visual Studio Code and adding the Azure Account Extension.

I recently got a build of the new open source Windows Terminal on my machine and I set up some profiles with tabs for DOS, PowerShell, VS2019, and Ubuntu, but something was missing. Why can't I get my Azure Cloud Shell?

Sure, I can fire up a VM and ssh into it. But Azure Cloud Shell spins up a free container with a persistent cloud drive AND has a bunch of developer tools like python, node, dotnet, and go already installed. I'd love to use it! But it's not a VM and the container isn't exposed with SSH. Instead, we'll want to spin the Azure Cloud Shell up the same way the https://shell.azure.com site does, with web calls and web sockets. So...why not do it?

image

I thought I was pretty clever when I had this idea so I started a C# implementation myself. Then I talked to Anders Liu from work about how to do it right, and over the weekend he beat me to it with his own VERY nice and clean implementation in Go that he put on his github at https://github.com/yangl900/azshell. We shared this on an internal alias and found out that Noel Bundick had the same great idea and put it in his Az CLI extensions pack (which has a ton of other cool stuff you should see). Anders' is standalone and Noel's is an Az CLI extension.
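To give you a feel for the approach - and only a feel, since the endpoints and payloads below are placeholders rather than the real Azure API (go read Anders' or Noel's code for the actual protocol) - a client like this roughly does three things: get a token, ask the service for your Cloud Shell container, then attach to its terminal over a web socket.

// Conceptual sketch only. The URLs below are deliberately fake placeholders;
// the real protocol lives in https://github.com/yangl900/azshell.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class CloudShellSketch
{
    static async Task Main()
    {
        // 1. Get an Azure AD access token (azshell leans on the Azure CLI's login for this).
        string token = Environment.GetEnvironmentVariable("AZURE_ACCESS_TOKEN") ?? "<token>";

        // 2. Ask the Cloud Shell service to provision (or reuse) your free container.
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
        var provision = await http.PutAsync(
            "https://example.invalid/providers/CloudShell/consoles/default",   // placeholder URL
            new StringContent("{}", Encoding.UTF8, "application/json"));
        provision.EnsureSuccessStatusCode();

        // 3. Connect to the terminal's web socket, then pump keystrokes up and
        //    terminal output down - that's the whole trick.
        using var ws = new ClientWebSocket();
        await ws.ConnectAsync(new Uri("wss://example.invalid/terminals/1"), CancellationToken.None);
        var buffer = new byte[4096];
        var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        Console.Write(Encoding.UTF8.GetString(buffer, 0, result.Count));
    }
}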

Either way, we all think this idea has merit and maybe it should be an official thing! What do you think? Regardless, maybe it doesn't need to be, since you can try it today with these open source options.

Just put "azshell.exe" in your PATH and make sure you have the latest Azure CLI installed and you're logged in.

By the way, you can also get a Cloud Shell inside the Portal. In fact there's a button for it at the top that looks like >_. Personally I think that with the addition of "az shell" (or in this case, azshell.exe) from the command line, it completes the circle in a really cool way.

image

Let me know what you think in the comments!

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

A new Console for Windows - It's the open source Windows Terminal

May 3, 2019

Description:

"My fellow Windows users, our long national nightmare is over." The Windows Terminal is here, it's open source, it's real, and it's spectacular. It's very early days to be clear, but the new Windows Terminal is open source and it's up at https://github.com/microsoft/Terminal for you to check out.

The repository includes:

Windows Terminal
The Windows console host (conhost.exe) - a local copy that is separate from the built-in Windows one.
Components shared between the two projects
ColorTool
Sample projects that show how to consume the Windows Console API

And even better, it'll be, as they say:

Windows Terminal will be delivered via the Microsoft Store in Windows 10 and will be updated regularly, ensuring you are always up to date and able to enjoy the newest features and latest improvements with minimum effort.

How do you get it? TODAY you clone the repo and build your own copy. There will be early builds in the Store this summer and 1.0 should be out before the end of the year.

As of today, the Windows Terminal and Windows Console have been made open source and you can clone, build, run, and test the code from the repository on GitHub: https://github.com/Microsoft/Terminal

This summer in 2019, Windows Terminal previews will be released to the Microsoft Store for early adopters to use and provide feedback.

This winter in 2019, our goal is to launch Windows Terminal 1.0 and we’ll work with the community to ensure it’s ready before we release!

So yes, it'll take some effort if you want to play with it today. But good things are worth a little effort. Here are some of the things I've done to mine. I hope you make your Windows Terminal your own as well!

Windows Terminal

When you click the menu, check out Settings, which will open your profiles.json in your JSON editor. I use VS Code to edit. You'll need to run Format Document to make the JSON look nice as today it may show up on one line.

You can create color profiles in the "schemes" node. For example, here's my "UbuntuLegit" color theme in my profiles.json.

{
    "name": "UbuntuLegit",
    "foreground": "#EEEEEE",
    "background": "#2C001E",
    "colors": [
        "#4E9A06", "#CC0000", "#300A24", "#C4A000",
        "#3465A4", "#75507B", "#06989A", "#D3D7CF",
        "#555753", "#EF2929", "#8AE234", "#FCE94F",
        "#729FCF", "#AD7FA8", "#34E2E2", "#EEEEEE"
    ]
}

Here's an example profile with all the settings I know about set. This is for "CMD.exe"

"profiles": [
    {
        "startingDirectory": "C:/Users/Scott/Desktop",
        "guid": "{7d04ce37-c00f-43ac-ba47-992cb1393215}",
        "name": "DOS but not DOS",
        "colorscheme": "Solarized Dark",
        "historySize": 9001,
        "snapOnInput": true,
        "cursorColor": "#00FF00",
        "cursorHeight": 25,
        "cursorShape": "vintage",
        "commandline": "cmd.exe",
        "fontFace": "Cascadia Code",
        "fontSize": 20,
        "acrylicOpacity": 0.85,
        "useAcrylic": true,
        "closeOnExit": false,
        "padding": "0, 0, 0, 0",
        "icon": "ms-appdata:///roaming/cmd-32.png"
    },

I like the "vintage" cursor and I make it bright green. I can also add icons in this location:

%LOCALAPPDATA%\packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\RoamingState

So I put some 32x32 PNGs in that folder and then I can reference them as seen above with ms-appdata://

Cool Icons

I'll go into more detail about what's happening in each of these profiles/tabs in the next post! I've got a few creative ideas for taking MY Windows Terminal to the next level.

"defaultProfile": "{7d04ce37-c00f-43ac-ba47-992cb1393215}",
"initialRows": 30,
"initialCols": 120,
"alwaysShowTabs": true,
"showTerminalTitleInTitlebar": true,
"experimental_showTabsInTitlebar": true,
"requestedTheme": "dark",

Here I've set the theme to dark using "requestedTheme" even though I run Windows in a light theme. I'm setting the tabs to be shown all the time and moved the tabs into the TitleBar.

Here's my Ubuntu tab with the UbuntuLegit color theme above:

Nice Ubuntu Colors

Notice I'm also using Powerline in my prompt. I'm using Fira Code which has the glyphs I need but you can certainly use patched Powerline fonts or make your own fonts with tools like those from Nerd Fonts and its font patcher. This font patcher is often used to take your favorite monospace font and add Powerline glyphs to it.

NOTE: If you see any weird spacing issues with glyphs you might try using --use-single-width-glyphs to work around it. By release, I assume all these little issues will be worked out. I had no issues with Fira Code in my case, your mileage may vary.

This new Windows Terminal is great. As mentioned, it's super early days but it's amazingly fast, runs on your GPU (the current conhost runs on your CPU) and it's VERY configurable.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.
© 2018 Scott Hanselman. All rights reserved.
     

Did I leave the garage door open? A no-code project with Azure IoT Central and the MXChip DevKit

May 1, 2019

Description:

Azure IoT DevKit

For whatever reason when a programmer tries something out for the first time, they write a "Hello World!" application. In the IoT (Internet of Things) world of devices, it's always fun to make an LED blink as a good getting started sample project.

When I'm trying out an IoT platform or tiny microcontroller I have my own "Hello World" project - I try to build a simple system that tells me "Did I leave the garage door open?"

I wanted to see how hard it would be to use an Azure IoT MXChip DevKit to build this little system. The DevKit is small and thin but includes WiFi, an OLED display, a headphone jack, a microphone, and sensors for temperature, humidity, motion, and pressure. The kit isn't super expensive given all it does and you can buy it most anywhere. The DevKit is also super easy to update and it's actively developed. In fact, I just updated mine to Firmware 1.6.2 yesterday and there is an Azure IoT Device Workbench Extension for VS Code. There is also a fantastic IoT DevKit Project Catalog you should check out.

I wanted to use this little Arduino friendly device and have it talk to Azure. My goal was to see how quickly and simply I could make a solution that would:

Detect if my garage door is open
If it's open for more than 4 minutes, text me
Later, perhaps I'll figure out how to reply to the text or take an action to close the door remotely.

However, there is an Azure IoT Hub and there's Azure IoT Central and this was initially confusing to me. It seems that Azure IoT Hub is an individual Azure service but it's not an end-to-end IoT solution - it's a tool in the toolbox. Azure IoT Central, on the other hand, is a browser-based system with templates that is a SaaS (Software as a Service) and hides most of the underlying systems. With IoT Central no coding is needed!

Azure IoT Central: What is Azure IoT Central?
IoT solution accelerators: What are Azure IoT solution accelerators?
IoT Hub: What is Azure IoT Hub?

Slick. I was fully prepared to write Arduino code to get this garage door sensor working but if I can do it with no code, rock on. I may finish this before lunch is over. I have an Azure account so I went to https://azureiotcentral.com and created a new Application. I chose Pay as You Go but it's free for the first 5 devices so, swag.

Create a New Azure IoT Central App

You should totally check this out even if you don't have an IoT DevKit because you likely DO have a Raspberry Pi and it totally has device templates for Pis or even Windows 10 IoT Core Devices.

Azure IoT Central

Updating the firmware for the IoT DevKit couldn't be easier. You plug it into a free USB port, it shows up as a disk drive, and you drag in the new (or alternate) firmware. If you're doing something in production you'll likely want to do OTA (Over-the-air) firmware updates with Azure IoT Hub automatic device management, so it's good to know that's also an option. The default DevKit firmware is fun to explore but I am connecting this device to Azure (and my Wifi) so I used the firmware and instructions from here which is firmware specific to Azure IoT Central.

The device reboots as a temporary hotspot (very clever) and then you can connect to its wifi, and then it'll connect to yours over WPA2. Once you're connected to wifi, you can add a new Real (or Simulated - you can actually do everything I'm doing here without a real device!) device using a Device ID that you'll pair with your MXChip IoT DevKit. After it's connected you'll see tons of telemetry pour into Azure. You can, of course, choose what you want to send and send just the least amount your project needs, but it's still a very cool first experience to see temp, humidity, and on and on from this little device.

MxChip in Azure

Here's a wonderful HIGH QUALITY diagram of my Garage door planned system. You only wish your specifications were this sophisticated. ;)

Basically the idea is that when the door is closed I'll have the IoT DevKit taped to the door with a battery, then when it opens it'll rotate 90 degrees and the Z axis of the accelerometer will change! If it stays there for more than 5 minutes then it should text me!

image

In Azure IoT central I made a Device Template with a Telemetry Rule that listens to the changes in the accelerometer Z and when the average is less than 900 (I figured this number out by moving it around and testing) then it fires an Action.
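There's no code in this project - IoT Central's rule engine and Azure Monitor do the work - but if you like seeing an idea as code, here's a rough C# sketch of what that rule amounts to. The threshold and timing are just the numbers I picked, and I'm sketching it as a simple timer rather than IoT Central's rolling average.

// Conceptual sketch of the garage door rule. IoT Central does this for you; no code required.
using System;

class GarageDoorRule
{
    private const double ClosedThreshold = 900;                // door closed => accelerometer Z stays above this
    private static readonly TimeSpan AlertAfter = TimeSpan.FromMinutes(5);
    private DateTime? _openSince;

    // Feed in each accelerometer Z reading; returns true when it's time to send the SMS.
    public bool ShouldAlert(DateTime now, double accelerometerZ)
    {
        if (accelerometerZ >= ClosedThreshold)
        {
            _openSince = null;                                 // door is closed (or closed again) - reset
            return false;
        }

        if (_openSince == null)
            _openSince = now;                                  // first reading that looks "open"

        return now - _openSince.Value >= AlertAfter;           // open long enough? fire the action
    }
}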

The "Action" uses an Azure Monitor action group that can either SMS me or even call me with a voice call!

In this chart when the accelerometer is above the line the garage door is closed and when it drops below the line it's open!

The gyroscope Z changing with time

Here's the Azure Monitoring alert that texts me when I leave the garage door open too long.

Azure Activity Monitor

And here's my alert SMS!

mxchip

I was very impressed I didn't have to write any code to pull this off. I'm going to try this same "Hello World" later with custom code using an AdaFruit Huzzah Feather and an ADXL345 Accelerometer. I'll write Arduino C code and still have it talk to Azure for alerts.

It's amazing how clean and simple the building blocks are for projects like this today.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Software Defined Radio is a great way to bridge the physical and the digital and teach STEM

Apr 25, 2019

Description:

Software Defined Radio Adapter

One of the magical technologies that makes an Open Source Artificial Pancreas possible is "Software-defined Radio" or SDR. I have found that SDR is one of those technologies that you've either heard of and agree it's amazing or you've literally never heard of it. Well, buckle up, friends!

There's an amazing write up by Pete Schwamb, one of the core members of the community who works on Loop full time now, on how Software Defined Radios have allowed the community to "sniff" the communication protocols of insulin pumps in the RF spectrum and reverse engineer the communications for the Medtronic and now Omnipod Eros Insulin Pumps. It's a fascinating read that really illustrates how you just need the right people and a good cause and you can do anything.

In his post, Pete explains how he configured the SDR attached to his computer to listen into the 433MHz range and capture the RF (radio frequencies) coming to and from an insulin pump. He shows how the shifts between a slightly higher and slightly lower frequency is used to express 1s and 0s, just like a high voltage is a 1 and a low or no voltage is a 0.

Radio Frequency to 1s and 0s

Then he gets a whole "packet," plucks it out of the thin air, and then manipulates it from Python. Insert Major Motion Picture Programmer Montage and an open source pancreas pops out the other side.

1s and 0s from RF into a string in Python
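Pete's montage happens in Python against real captured RF. Purely to illustrate the idea in this blog's usual language, here's a toy C# sketch of frequency-shift keying: a measured shift above the center frequency is a 1, below it is a 0, and the bits pack into bytes you can start treating as a packet. The numbers are made up.

// Toy illustration of FSK decoding. Real SDR work does this against sampled I/Q data;
// here we pretend we've already measured the dominant frequency of each symbol period.
using System;
using System.Linq;

class FskToyDecoder
{
    static void Main()
    {
        const double centerHz = 433_900_000;                   // e.g. a 433 MHz device
        double[] symbolFrequencies =
        {
            433_925_000, 433_875_000, 433_925_000, 433_925_000,
            433_875_000, 433_875_000, 433_925_000, 433_875_000
        };

        // Shift above center => 1, below => 0
        char[] bits = symbolFrequencies.Select(f => f > centerHz ? '1' : '0').ToArray();
        Console.WriteLine(new string(bits));                    // prints 10110010

        // Pack the bits into a byte - the first step toward a "packet" you can poke at
        byte value = 0;
        foreach (char bit in bits)
            value = (byte)((value << 1) | (bit == '1' ? 1 : 0));
        Console.WriteLine($"0x{value:X2}");                     // prints 0xB2
    }
}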

Lemme tell you, Dear Reader, Hello World is nice, but pulling binary data out of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light is THE HOTNESS.

From a STEM perspective, SDR is more fun than Console Apps when educating kids about the world and it's a great way to make the abstract REAL while teaching programming and science.

You can get an SDR kit for as little as US$20 as a USB device. They are so simple and small it's hard to believe they work at all.

Just plug it in and download Airspy (formerly SDRSharp; there are many choices in the SDR space) and run the install-rtlsdr.bat to set up a few drivers.

You'll want to run zadig.exe and change the default driver for listening to radio (FM, TV) over to something more low-level. Run it, select "List All Interfaces," and select "Bulk Interface 0"

Updating SDR with Zadig

After you hit Replace Driver with WinUSB, you can close this and run SDRSharp.exe.

I've set my SDRSharp to WFM (FM Radio) and turned the Gain up and OMG it's the radio.

Listening to the Radio with SDR

In this pic I'm listening to 91.5 FM in Portland, Oregon which is National Public Radio. The news is the center red line moving down, while the far right is 92.3, a rock station, and 90.7 on the far left is more jazz. You can almost see it!

AdaFruit has a great SDR tutorial and I'll use it to find the local station for National Weather Radio. This is the weather alert that is available anywhere here in America. Mine was Narrow Band FM (NFM) at 162.550 MHz! It was harder to hear but it was there when I turned up the gain.

The weather report

But wait, it's more than radio, it's the whole spectrum!

Here I am sending a "Get Pump Model" command to my insulin pump in the 900MHz range! The meaty part is in the red.

Talking to an Insulin Pump

Here's the heartbeat and requests that are sent to my Insulin Pump from my Loop app through a RileyLink (BT to RF Bridge). I'm seeing the Looping communications of my Open Source Artificial Pancreas here, live.

Watching RF Pump Communications

Next post or two I'll try to get the raw bits off of the RF signal of something interesting. If you haven't messed with SDR you should really give it a try! As I said before, you can get an SDR kit for as little as US$20 as a USB device.

Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today!
© 2018 Scott Hanselman. All rights reserved.
     

Open Source Artificial Pancreases will become the new standard of care for Diabetes in 2019

Apr 23, 2019

Description:

Loop is an open source pancreas for iPhone

I've been a Type 1 diabetic for over 25 years. Diabetes sucks. They actually give you an award for staying alive for years on insulin. Diabetics don't usually die of old age, they die of heart disease or stroke, kidney failure, and while they're at it they may go blind, get nerve damage, amputation, and a bunch of other stuff. It used to be a death sentence but when insulin was introduced as a treatment in 1921, there was a chance for something new.

The idea is that if you keep your blood sugars close to normal - if you can simulate your non-working pancreas - you can dodge all of that and live long enough to get hit by an ice cream truck! At least, that's how I hope I go. :)

Early on it was boiling big gauge steel needles and pork insulin to dose, and peeing on a stick to get a sense of sugar levels. Then it was a dozen finger pricks a day and a half dozen manual shots with a syringe. Then it was inserted continuous glucose meters and insulin pumps that - while not automatic - mean less invasive treatment and greater control.

Today, we are closing the loop. What's the loop? It's this:

Consider my glucose levels, what I'm about to eat, and what I'm about to do (and dozens of other environmental factors)
Dose myself with insulin
GOTO 1. Every few hours, or every few minutes, depending on the situation.

I do that. Manually. Every diabetic does, and the mental pressure - the intense background psychic weight of it all - is overwhelming. We want to lower the cognitive load of diabetes. This is a disease where you may not live as long if you're not good at math. Literally. That's unfair.

The community is "looping" by allowing an algorithm to make some of those decisions for me.

I've personally been looping with an open source artificial pancreas for over two years. It's night and day from where I started with finger sticks and a half dozen needle sticks a day. It's not perfect, it's not automatic, but Open Source Pancreas are "Tesla autopilot for diabetes." It doesn't always park the car right or stop at every stop light, but it works very hard to keep me in-between the lines and going straight ahead and now that I have it, I can't imagine living without it.

I sleep through the night while my Loop makes tiny adjustments every five minutes to keep my sugars as flat as possible. I don't know about you but my pancreas sits on my nightstand.

It's happening and it can't be stopped

Seven years ago I wrote about The Sad State of Diabetes Technology in 2012. Three years ago The Promising State of Diabetes Technology in 2016 and last year The Extremely Promising State of Diabetes Technology in 2018. There's a great comment from the first blog post in 2012 where Howard Loop shared his frustration with the state of things. Unlike most commenters on the Internet, amazingly Howard took action and started the Tidepool Organization! Everything in his comment from 7 years ago is happening.
Great article, Scott. You've accurately captured the frustration I've felt since my 12 year old daughter was diagnosed with T1D nine months ago. She also wears a pump and CGM and bravely performs the ritual you demonstrate in your video every three days. The technology is so retro it's embarrassing.

It's 2019 and things are really looking up. The open source DIY diabetes community is thriving. There are SEVERAL open pancreas systems to choose from and there's constant innovation happening with OpenAPS and Loop/LoopKit.

OpenAPS runs on devices like Raspberry Pi Zeros and is a self-contained pancreas with the communications and brain/algorithm all on the main device. Loop runs on an iPhone and uses a "RileyLink" device that bridges the RF (Radio Frequency) insulin pump communications with modern Bluetooth.

The first bad part is that I am running a 15 year old, out-of-warranty, cracked insulin pump I bought on Craigslist. Most new pumps are locked down, and my old pump is the last version that supported remote control. However, the Loop open source project announced support for a second pump this week, the OmniPod Eros. This is the first time an "in warranty" pump has been supported and it also proves the larger point made by the diabetes community. We Are Not Waiting. We want open choices and open data that put us in control.

Read about the history of Loop by original developer Nate Racklyeft. As he points out, a thing like Loop or OpenAPS is the result of a thousand little steps and innovation by countless community members who are so generous with their time.

The first system to run it was a Raspberry Pi; the code was a series of plugins, written with the help of Chris Hannemann, to the openaps toolkit developed by Ben West in collaboration with Dana Lewis and Scott Leibrand. I’m still in awe of the elegant premise in Ben’s design: a system of repeatable, recordable, and extendable transform commands, all backed by Git. The central plugin of the toolkit is decocare: Ben’s 5-year magnum opus, a reverse-engineered protocol of the Minimed Carelink USB radio to command insulin pumps.

There's an amazing write up by Pete Schwamb, one of the core members of the community who works on Loop full time now,  on how Software Defined Radios have allowed the community to "sniff" the communication protocols of insulin pumps in the RF spectrum and reverse engineer the communications for the Medtronic and now Omnipod Eros Insulin Pumps. It's a fascinating read that really illustrates how you just need the right people and a good cause and you can do anything.

You can watch my video presentation "Solving Diabetes with an Open Source Artificial Pancreas" where I offer an overview of the problem, a number of solutions offered over the years, and two open source pancreas options in the form of LoopKit and OpenAPS.

The community members and organizations like Tidepool and the Nightscout Foundation are working with the FDA to take projects and concepts like an open source pancreas system from a threat based on years of frustration to a bright future based on mutual collaboration!

In March, 2018, the FDA announced a de novo iCGM (integrated CGM) designation. A de novo designation is the FDA process for creating new device classifications, in this case moving qualifying CGMs from Class-III, the highest FDA risk classification, to Class-II with Special Controls. The first CGM to get this designation is the Dexcom G6.

Diabetic Xbox Avatar

What does this mean? It means the FDA is willing to classify continuous glucose meters in a formal way that paves a path towards interoperable devices. Today we hack devices to build these Loops with out-of-warranty pumps. We are doing this utterly on our own. It can take months to collect the equipment needed, get ancient pumps on the gray market, compile the software yourself - which is a huge hurdle for the non-technical.

Imagine a future where someone could buy a supported and in-warranty "iPump," download an officially supported app or package, and start looping! We could have world of open and interoperable devices and swappable algorithms.

In October of 2018 the non-profit Tidepool organization announced its intent to deliver the Loop app as a supported and FDA-regulated mobile app in the Apple App Store! This is happening, people, but we are just getting started.

To learn more, start reading.

Loop - https://loopkit.github.io/loopdocs/
OpenAPS - https://openaps.org/
Tidepool - https://www.tidepool.org/

Also, if you're diabetic, consider buying a Nightscout Xbox Avatar accessory so you can see yourself represented while you game!

Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today!


© 2018 Scott Hanselman. All rights reserved.
     

Exploring DNS with the .NET Core based Technitium DNS Server

Apr 18, 2019

Description:

Earlier this week I talked about how Your Computer is not a Black Box and I spent some time in TCPView and at the command line exploring open ports on my computer. I was doing this in order to debug an issue with a local DNS server I was playing with, so I thought I'd take a moment and look at that server itself.

The Technitium DNS Server is a personal local DNS server (FOSS on GitHub) written in C# and it runs on Windows, macOS, Linux, Raspberry Pi, etc. I downloaded the Portable app.

For Windows folks who aren't used to .tar.gz files, remember to "eXtract Zie Files!" with "tar -xzvf DnsServerPortable.tar.gz -C ./TechnitiumDNS/" and it's also worth reminding you all that tar.exe, curl.exe, wget.exe and more are all included in Windows 10 and have been since 2017. If that's too hard, use 7zip.

Technitium DNS is pretty cool, you just unzip/tar it and run start.sh or start.bat and it "just works." Of course, I did have a process already on port 53 - DNS - so I did a little debugging, but that was my fault.

Here's the local web UI that you can use to administer the server locally. You can forward to whatever upstream DNS server you'd like, with the added bonus that the forwarder can be DNS over HTTPS so you can use things like Cloudflare, Google, or Quad9. Using DNS over HTTPS means your DNS lookups can be secured with DNSSEC and are far more secure and private than regular DNS over UDP/TCP.
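If you're curious what a DNS over HTTPS lookup actually looks like, Cloudflare exposes a JSON flavor of it that you can poke at with a few lines of C#. This is just a curiosity sketch of the protocol from the client side, not how Technitium's forwarder works internally.

// Ask Cloudflare's DNS over HTTPS endpoint for an A record - no port 53 involved.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class DohLookup
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("accept", "application/dns-json");

        string json = await http.GetStringAsync(
            "https://cloudflare-dns.com/dns-query?name=hanselman.com&type=A");

        Console.WriteLine(json);   // raw JSON response, including the Answer records
    }
}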

Technitium also includes support for DNS Sinkholes (similar to how I use my Pi-Hole) and Block List URLs. It'll automatically download block lists daily and block ads.

Technitium is a lovely .NET Core based DNS Server

It's also educational to try running your own DNS server and it's fun to read the code! The code for Technitium's DNS Server is up at https://github.com/TechnitiumSoftware/DnsServer and is super interesting from a networking perspective, but also from a C# perspective. It's a very interesting example of some .NET Core code at a very low level and I'm thrilled that it works on every operating system.

There are even bash scripts for setting Technitium up on your Raspberry Pi or Ubuntu to make it easy. If you are using Windows and don't care about .NET Core you can use the .NET that's included with Windows, and Technitium has a Tray app and Installer as well.

Some of the code isn't "idiomatic" C#/.NET Core but it's interesting to read about. The main DnsWebService.cs is pretty intense as it doesn't use any ASP.NET Core routing or primitives. It's a complete webserver written using only System.Net and its own support libraries, along with some of the lower-level Newtonsoft.Json libraries.

The main DnsServer is also quite low level and very performant. It lives in DnsServer.cs. It opens up n sockets (depending on how many ports you bind to) and starts accepting connections here. DNS Datagrams start getting parsed here, right off the stream. The supporting libraries and networking helper code lives over at https://github.com/TechnitiumSoftware/TechnitiumLibrary which is a wealth of interesting and useful code covering BitTorrent, Mail, and Firewall management. There's a ton of OO representations of networking concepts, and all the DNS records are parsed manually.
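If you've never looked at raw DNS, the fixed 12-byte header at the front of every message is a fun place to start. Here's a tiny sketch of my own (not Technitium's code) that reads it the same low-level, big-endian way:

// The first 12 bytes of every DNS message: an ID, a flags word, and four 16-bit counts,
// all big-endian. A toy version of what a hand-rolled parser has to do first.
using System;
using System.Buffers.Binary;

class DnsHeader
{
    public ushort Id, Flags, QuestionCount, AnswerCount, AuthorityCount, AdditionalCount;

    public static DnsHeader Parse(ReadOnlySpan<byte> message)
    {
        if (message.Length < 12)
            throw new ArgumentException("DNS messages start with a 12 byte header");

        return new DnsHeader
        {
            Id              = BinaryPrimitives.ReadUInt16BigEndian(message.Slice(0, 2)),
            Flags           = BinaryPrimitives.ReadUInt16BigEndian(message.Slice(2, 2)),
            QuestionCount   = BinaryPrimitives.ReadUInt16BigEndian(message.Slice(4, 2)),
            AnswerCount     = BinaryPrimitives.ReadUInt16BigEndian(message.Slice(6, 2)),
            AuthorityCount  = BinaryPrimitives.ReadUInt16BigEndian(message.Slice(8, 2)),
            AdditionalCount = BinaryPrimitives.ReadUInt16BigEndian(message.Slice(10, 2))
        };
    }
}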

Technitium has a DNS Server, client, Mac Address Changer, and open source instant messenger. The developer is extremely prolific. They even host a version of "Get HTTPS for free" that works with Windows and makes getting Let's Encrypt certificates super easy.

Anyway, I've been enjoying exploring DNS again and reminding myself not only that it still works great (I learned about DNS by sniffing packets in networking class) but also that it's been updated and improved with caches, DNSSEC, DNS over HTTPS, and more in the years since.

Here I've set my IPv4 DNS to 127.0.0.1 and my IPv6 DNS to ::1, then I run NSLookup and try some domain lookups.

Looking up domains at the command line with nslookup

Again, to be clear, the local DNS server took these lookups and then forwarded them upstream to another server. However, you have the choice for your upstream lookups to be done over whatever protocols you want, you can use Google, OpenDNS, Quad9 (with DNSSEC or without), and on and on.

Are you running your own DNS Server?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Your computer is not a black box - Understanding Processes and Ports on Windows by exploring

Apr 16, 2019

Description:

TCPView

I did a blog post many years ago reminding folks that The Internet is not a Black Box. Virtually nothing is hidden from you. The same is true for your computer, whether it runs Linux, Mac, or Windows.

Here's something that happened today at lunch. I was testing a local DNS Server (more on this on Thursday) and I started it up...and it didn't work.

In order to test a DNS server on Windows, you can go to the command line and run "nslookup" then use the command "server 1.1.1.1" where 1.1.1.1 is the DNS server you'd like to try out. Go ahead and try it now. Run cmd.exe or powershell.exe and then run "nslookup" and then type any domain name. You should get an IP address.

Given that I was trying to run a DNS Server on localhost:53 (Port 53 is where DNS usually hangs out, just like Port 80 is where Web Servers (HTTP) hang out and 443 is where Secured Web Servers (HTTPS) usually are) I should be able to do this. I'm trying to send DNS requests to localhost:53

C:\Users\scott> nslookup
Default Server: pihole
Address: 192.168.151.6

> server 127.0.0.1
Default Server: localhost
Address: 127.0.0.1

> hanselman.com
Server: localhost
Address: 127.0.0.1

*** localhost can't find hanselman.com: No response from server
> hanselman.com

Weird, that didn't work. Let me try a DNS Server I know works like Google's 8.8.8.8 public DNS

> server 8.8.8.8
Default Server: google-public-dns-a.google.com
Address: 8.8.8.8

> hanselman.com
Server: google-public-dns-a.google.com
Address: 8.8.8.8

Non-authoritative answer:
Name: hanselman.com
Address: 206.72.120.92

Ok, it seems my local DNS isn't listening on port 53. Checking the logs of the Technitium local DNS server shows this:

[2019-04-15 23:26:31 UTC] [0.0.0.0:53] [UDP] System.Net.Sockets.SocketException (10048): Only one usage of each socket address (protocol/network address/port) is normally permitted
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at DnsServerCore.DnsServer.Start() in Z:\Technitium\Projects\DnsServer\DnsServerCore\DnsServer.cs:line 1234
[2019-04-15 23:26:31 UTC] [0.0.0.0:53] [TCP] DNS Server was bound successfully.
[2019-04-15 23:26:31 UTC] [[::]:53] [UDP] DNS Server was bound successfully.
[2019-04-15 23:26:31 UTC] [[::]:53] [TCP] DNS Server was bound successfully.

The DNS Server's process is trying to bind to TCP:53 and UDP:53 using IPv4 (expressed as 0.0.0.0:53, meaning all local IPv4 addresses) and then TCP:53 and UDP:53 using IPv6 (expressed as [::]:53) but it seems like the UDP binding to port 53 on IPv4 failed. Weird.
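You can reproduce that exact failure in a few lines of C#: bind one UDP socket to a port, then try to bind a second one to the same address.

// Two sockets, one port: the second Bind throws SocketException 10048 (WSAEADDRINUSE),
// the same "Only one usage of each socket address" error in the Technitium log above.
using System;
using System.Net;
using System.Net.Sockets;

class PortCollision
{
    static void Main()
    {
        using var first = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        first.Bind(new IPEndPoint(IPAddress.Any, 53));   // this will itself fail if something (ICS!) already owns UDP 53
        Console.WriteLine("First socket bound to 0.0.0.0:53");

        using var second = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        try
        {
            second.Bind(new IPEndPoint(IPAddress.Any, 53));
        }
        catch (SocketException ex)
        {
            Console.WriteLine($"Second bind failed: {ex.SocketErrorCode} ({ex.ErrorCode})");
        }
    }
}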

Someone else is listening in on Port 53 localhost via IPv4.

That's weird. How can we find out what ports are open locally?

I can run "netstat" and ask Windows for a list of all TCP/IP connections and the processes that are listening on which ports. I'll also PIPE the results to "clip" which will put it in the clipboard automatically. Then I can look at it in a text editor (or I could pipe it through find or findstr).

You can run netstat --help to get the right arguments. I've asked it to tell me the process IDs and all the details it can.

Active Connections
Proto Local Address State PID

TCP 0.0.0.0:53 LISTENING 27456
[dotnet.exe]

UDP 0.0.0.0:53 LISTENING 11128
[svchost.exe]

TCP [::]:53 *:* 27456
[dotnet.exe]

UDP [::]:53 *:* 27456
[dotnet.exe]

Hm, a service is already listening on port 53. I'm running Windows 10, not a Server, so it's odd there's already a DNS listener on port 53.

I wonder what service is it?

I can check the Services tab of the Task Manager and sort by PID. Or I can run "tasklist" and ask directly.

C:\WINDOWS\system32>tasklist /svc /fi "pid eq 11128"

Image Name PID Services
========================= ======== ============================================
svchost.exe 11128 SharedAccess

That's Internet Connection Sharing, and it's used by Docker and other apps for NAT translation and routing. I can shut it down with the sc (service control) or with "net stop."

C:\WINDOWS\system32>net stop sharedaccess
The Internet Connection Sharing (ICS) service is stopping.
The Internet Connection Sharing (ICS) service was stopped successfully.

Now I can start my DNS Server again (it's written in .NET Core) and I can see with tcpview.exe that it's listening on all appropriate ports.

TCPView showing everything on Port 53

In conclusion, this is a good reminder to refresh yourself on the basics of IPv4, IPv6, how processes talk to and allocate ports, what Process IDs (PIDs) are, and how they all relate. Much of this is taught in university computer science courses, but if you're self-taught or not doing low-level work every day, it's easy to forget.

Virtually nothing on your computer is hidden from you!

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Blocking ads before they enter your house at the DNS level with pi-hole and a cheap Raspberry Pi

Apr 11, 2019

Description:


Lots of folks ask me about Raspberry Pis. How many I have, what I use them for. At last count there's at least 22 Raspberry Pis in use in our house.

One runs our dakboard family dashboard that we built in a weekend but use every day. We have 3 that are set up for retrogaming - one in a 3D printed Gameboy (a PiGRRL, in fact), one in an X-Arcade Tankstick, one in a tiny laser-cut arcade case for the desktop. I have a Raspberry Pi that runs one of my 3D Printers via OctoPrint. This one also has a camera and does time-lapse videos of my 3D prints. We have another 3 that run little robots my sons and I have built. 6 are running in a local Kubernetes Cluster; these 6 Pis are my personal cloud, so maybe there's 16 Pis in the house and one Pi Cloud/Cluster. One is an internet radio in the 13 year old's room running PiMusicBox. One is a touchscreen tablet the 11 year old uses for Scratch. Imagine a Linux iPad. One runs Kodi as an entertainment center in the kids' play room. One lives in a CrowPi that we use for experiments and .NET Core remote debugging. Another three are Raspberry Pi Zero Ws for various experiments, with one Pi Zero W acting as a backup Open Source Artificial Pancreas. And most recently, one is a Pi-hole - a black hole that eats tracking cookies, advertising, and other bad stuff. See also "shut your pie hole." AKA that place you put pie.

A Pi-hole is a Raspberry Pi appliance that takes the form of a DNS blocker at the network level. You image a Pi, set up your network to use that Pi as a DNS server, and maybe white-list a few sites when things don't work.

I was initially skeptical, but I'm giving it a try. It doesn't process all network traffic, it's a DNS hop on the way out that intercepts DNS requests for known problematic sites and serves back nothing.

Installation is trivial if you just run unread and untrusted code from the 'net ;)

curl -sSL https://install.pi-hole.net | bash

Otherwise, follow their instructions and download the installer, study it, and run it.

I put my pi-hole installation on the metal, but there's also a very nice Docker Pi-hole setup if you prefer that. You can even go further if, like me, you have a Synology NAS, which can also run Docker, which can in turn run a Pi-hole.

Within the admin interface you can tail the logs for the entire network, which is also amazing to see. You think you know what's talking to the internet from your house - you don't. Everything is logged and listed. After installing the Pi-hole roughly 18% of the DNS queries heading out of my house were blocked. At one point over 23% were blocked. Oy.

NOTE: If you're using an Amplifi HD or any "clever" router, you'll want to change the setting "Bypass DNS cache" otherwise the Amplifi will still remain the DNS lookup of choice on your network. This setting will also confuse the Pi-hole and you'll end up with just one "client" of the Pi-hole - the router itself.

For me it's less about advertising - especially on small blogs or news sites I want to support - it's about just obnoxious tracking cookies and JavaScript. I'm going to keep using Pi-hole for a few months and see how it goes. Do be aware that some things WILL break. Could be a kid's iPhone free-to-play game that won't work unless it can download an ad, could be your company's VPN. You'll need to log into http://pi.hole/admin (make sure you save your password when you first install, and you can only change it at the SSH command line with "pihole -a -p") and sometimes disable it for a few minutes to test, then whitelist certain domains. I suspect after a few weeks I'll have it nicely dialed in.

Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download at https://datalust.co/seq.
© 2018 Scott Hanselman. All rights reserved.
     

Accessibility Insights for the Web and Windows makes accessibility even easier

Apr 9, 2019

Description:

Accessibility Insights. I recently stumbled upon https://accessibilityinsights.io. There's both a Chrome/Edge extension and a Windows app, both designed to make it easier to find and fix accessibility issues in your websites and apps.

The GitHub for the Accessibility Insights extension for the web is at https://github.com/Microsoft/accessibility-insights-web and they have three trains you can get on:

Canary (released continuously)
Insider (on feature completion)
Production (after validation in Insider)

It builds on top of the Deque Axe core engine with a really fresh UI. The "FastPass" found these issues with my podcast site in seconds - which kind of makes me feel bad, but at least I know what's wrong!

However, the most impressive visualization in my opinion was the Tab Stop test! See below how it draws clear numbered line segments as you Tab from element to element. This is a brilliant way to understand exactly how someone without a mouse would move through your site.

I can easily see what elements are interactive and what's totally inaccessible with a keyboard! I can also see if the tab order is inconsistent with the logical order that's communicated visually.

Visualized Tab Stops as numbered points on a line segment that moves through the DOM

After the FastPass and Tab Visualizations, there's an extensive guided assessment that walks you through 22 deeper accessibility areas, each with several sub issues you might run into. As you move through each area, most have Visual Helpers to help you find elements that may have issues.

Checking for accessible elements on a web site

After you're done you can export your results as a self-contained HTML file you can check in and then compare with future test results.

There is also an Accessibility Insights for Windows if I wanted to check, for example, the accessibility of the now open-source Windows Calculator https://github.com/Microsoft/calculator.

It also supports Tab Stop visualization and is a lot like Spy++ - if you remember that classic developer app. There were no Accessibility issues with Calculator - which makes sense since it ships with Windows and a lot of people worked to make it Accessible.

Instead I tried to test Notepad2. Here you can see it found two elements that can have keyboard focus but have no names. Even cooler, you can click "New Bug" and it will create a new accessibility bug for you in Azure DevOps.

Test Results for Windows apps being checked for accessibility

The Windows app is also open source and up at https://github.com/Microsoft/accessibility-insights-windows for you to explore and file issues! There's also excellent developer docs to get you up to speed on the organization of the codebase and how each class and project works.

You can download both of these free open source Accessibility Tools at https://accessibilityinsights.io and start testing your websites and apps. I have some work to do!

Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download at https://datalust.co/seq.


© 2018 Scott Hanselman. All rights reserved.
     

Coders: Context Switching is hard for both computers and relationships

Apr 4, 2019

Description:

Coders: The Making of a New Tribe and the Remaking of the World

Clive Thompson is a longtime contributing writer for the New York Times Magazine and a columnist for Wired and now has a new book out called "Coders."

"Along the way, Coders thoughtfully ponders the morality and politics of code, including its implications for civic life and the economy. Programmers shape our everyday behavior: When they make something easy to do, we do more of it. When they make it hard or impossible, we do less of it."

I'm quoted in the book and I talk about how I've struggled with context-switching.

Here is TechTarget's decent definition of Context Switching:

A context switch is a procedure that a computer's CPU (central processing unit) follows to change from one task (or process) to another while ensuring that the tasks do not conflict. Effective context switching is critical if a computer is to provide user-friendly multitasking.

However, human context switching is the procedure we all have to go through to switch from "I am at work" mode to "I am at home" mode. This can be really challenging for everyone, no matter their job or background, but I propose for certain personalities and certain focused jobs like programming it can be even worse.

Quoting Clive from an ArsTechnica article where he mentions my troubles, emphasis mine:

One of the things that really leapt out is the almost aesthetic delight in efficiency and optimization that you find among software developers. They really like taking something that's being done ponderously, or that's repetitive, and optimizing it. Almost all engineering has focused on making things run more efficiently. Saving labor, consolidating steps, making something easier to do, amplifying human abilities. But it also can be almost impossible to turn off. Scott Hanselman talks about coding all day long and coming down to dinner. The rest of the family is cooking dinner and he immediately starts critiquing the inefficient ways they're doing it: "I've moved into code review of dinner."

Ordinarily a good rule of thumb on the internet is "don't read the comments." But we do. Here's a few from that ArsTechnica thread that are somewhat heartening. It sucks to "suffer" but there's a kind of camaraderie in shared suffering.

With reference to "Scott Hanselman talks about coding all day long and coming down to dinner. The rest of the family is cooking dinner and he immediately starts critiquing the inefficient ways they're doing it: "I've moved into code review of dinner.""

Wow, that rings incredibly true.

That's good to hear. I'm not alone!

I am not this person. I have never been this person.
Then again, I'm more of a hack than hacker, so maybe that's why. I'm one of those people who enjoys programming, but I've never been obsessed with elegance or efficiency. Does it work? Awesome, let's move on.

That's amazing that you have this ability. For some it's not just hard to turn off, it's impossible and it can ruin relationships.

When you find yourself making "TODO" and "FIXME" comments out loud, it's time to take a break. Don't ask me how I know this.

It me.

Yep, here too 2x--both my wife and I are always arguing over the most efficient way to drive somewhere. It's actually caused some serious arguments! And neither one of us are programmers or in that field. (Although I think each of us could have been.)
From the day I was conscious I've been into bin packing and shortest path algorithms--putting all the groceries up in the freezer even though we bought too much--bin packing. Going to that grocery store and back in peak traffic--shortest path. I use these so often and find such sheer joy in them that it's ridiculous, but hey, whatever keeps me happy.

This is definitely a thing that isn't programmer-specific. Learning to let go and to accept that your partner in life would be OK without you is important. My spouse is super competent and I'm sure could reboot the router without me and even drive from Point A to Point B without my nagging. ;)

However we forget these things and we tend to try and "be helpful" and hyper-optimize things that just don't need optimizing. Let it go. Let people just butter their damn bread the way they like. Let them drive a mile out of the way, you'll still get there. We tend to be ruder to our partners than we would be to a stranger.

That’s part of the reason why I’m now making all dinners for my family ;-)

LOL, this is also a common solution. Oh, you got opinions? Here's the spatula!

What do YOU think? How do you context switch and turn work off and try to be present for your family?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

The Transitive Property of Friendship - and the importance of the Warm Intro

Apr 2, 2019

Description:

Too many LinkedIn invitations. Per Wikipedia, "In mathematics, a binary relation ... is transitive if ... element a is related to an element b and b is related to an element c then a is also related to c."

Per Me, if I am cool with you, and you are cool with your friend, then I'm cool with your friend. I've decided this is The Transitive Property of Friendship.

As I try to mentor more and more people and help folks Level Up in tech, I'm realizing how important it is to #BeTheLuck for someone else. This is something that YOU can do - volunteer at local schools, forward that resume for your neighbor, give a Warm Intro to a friend of a friend.

A lot of one's success can be traced back to hard work and being prepared for opportunities to present themselves, but also to Warm Intros. Often you'll hear about someone who worked hard in school, studied, did well, but then got a job because "their parent knew a person who worked at x." That's something that is hard to replicate. For under-represented folks trying to break into tech, for example, it's the difference between your resume sitting in a giant queue somewhere vs. sitting on the desk of the hiring manager. Some people inherit a personal network and a resume can jump to the top of a stack with a single phone call, while others send CV after CV with nary a callback.

This is why The Warm Intro is so important. LinkedIn has tried to replicate this by allowing you to "build your professional network" but honestly, you can't tell if I'm cool with someone on LinkedIn just because they're connected to me. Even Facebook "friends" have changed the definition of friend. It certainly has for me. Now I'm mentally creating friend categories like work colleague, lowercase f friend, Uppercase F Friend, etc.

Here's where it gets hard. You can't help everyone. You also have to protect yourself and your own emotional well-being. This is where cultivating a true network of genuine friends and work colleagues comes in. If your First Ring of Friends are reliable, kind, and professional, then it's safer to assume that anyone they bring into your world has a similar mindset. Thus, The Transitive Property of Friendship - also known as "Any friend of Scott's is a friend of mine." The real personal network isn't determined by Facebook or LinkedIn, it's determined by your gut, your experiences, and your good judgment. If you get burned, you'll be less likely to recommend someone in the future.

I've been using this general rule to determine where and when to spend my time while still trying to Lend my Privilege to as many people as possible. It's important also to not be a "transactional networker." Be thoughtful if you're emailing someone cold (me or otherwise). Don't act like an episode of Billions on Showtime. We aren't keeping score, tracking favors, or asking for kickbacks. This isn't about Amazon Referral Money or Finder's Fees. When a new friend comes into your life via another and you feel you can help, give of your network and time freely. Crack the door open for them, and then let them kick it open and hopefully be successful.

All of this starts by you - we - building up warm, genuine professional relationships with a broad group of people. Then using that network not just for yourself, but to lift the voices and careers of those that come after you.

What are YOUR tips and thoughts on building a warm and genuine personal and professional network of folks?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Displaying your realtime Blood Glucose from NightScout on an AdaFruit PyPortal

Mar 29, 2019

Description:

AdaFruit makes an adorable tiny little Circuit Python IoT device called the PyPortal that's just about perfect for the kids - and me. It's a little DakBoard, if you will - a tiny, totally programmable display with Wi-Fi and lots of possibilities and sensors. Even better, you can just plug it in over USB and edit the code.py file directly on the drive that will appear. When you save code.py the device soft reboots and runs your code.

I've been using Visual Studio Code to program Circuit Python and it's become my most favorite IoT experience so far because it's just so easy. The "Developer's Inner Loop" of code, deploy, debug is so fast.

As you may know, I use a Dexcom CGM (Continuous Glucose Meter) to manage my Type 1 Diabetes. I feed the data every 5 minutes into an instance of the Nightscout Open Source software hosted in Azure. That gives me a REST API to my own body.

I use that REST API to make "glanceable displays" where I - or my family - can see my blood sugar quickly and easily.

I put my blood sugar in places like:

my git prompt
the color of my keyboard keys
Siri and Alexa
DakBoard family wall mounted dashboards

And today, on a tiny PyPortal device. The code is simple, noting that I don't speak Python, so Pull Requests are always appreciated.

import time
import board
from adafruit_pyportal import PyPortal

# Set up where we'll be fetching data from
DATA_SOURCE = "https://NIGHTSCOUTWEBSITE/api/v1/entries.json?count=1"
BG_VALUE = [0, 'sgv']
BG_DIRECTION = [0, 'direction']

RED = 0xFF0000;
ORANGE = 0xFFA500;
YELLOW = 0xFFFF00;
GREEN = 0x00FF00;

def get_bg_color(val):
    if val > 200:
        return RED
    elif val > 150:
        return YELLOW
    elif val < 60:
        return RED
    elif val < 80:
        return ORANGE
    return GREEN

def text_transform_bg(val):
    return str(val) + ' mg/dl'

def text_transform_direction(val):
    if val == "Flat":
        return "→"
    if val == "SingleUp":
        return "↑"
    if val == "DoubleUp":
        return "↑↑"
    if val == "DoubleDown":
        return "↓↓"
    if val == "SingleDown":
        return "↓"
    if val == "FortyFiveDown":
        return "→↓"
    if val == "FortyFiveUp":
        return "→↑"
    return val

# the current working directory (where this file is)
cwd = ("/"+__file__).rsplit('/', 1)[0]
pyportal = PyPortal(url=DATA_SOURCE,
                    json_path=(BG_VALUE, BG_DIRECTION),
                    status_neopixel=board.NEOPIXEL,
                    default_bg=0xFFFFFF,
                    text_font=cwd+"/fonts/Arial-Bold-24-Complete.bdf",
                    text_position=((90, 120),   # VALUE location
                                   (140, 160)), # DIRECTION location
                    text_color=(0x000000,       # sugar text color
                                0x000000),      # direction text color
                    text_wrap=(35,              # characters to wrap for sugar
                               0),              # no wrap for direction
                    text_maxlen=(180, 30),      # max text size for sugar & direction
                    text_transform=(text_transform_bg, text_transform_direction),
                    )

# speed up projects with lots of text by preloading the font!
pyportal.preload_font(b'mg/dl012345789')
pyportal.preload_font((0x2191, 0x2192, 0x2193))
#pyportal.preload_font()

while True:
    try:
        value = pyportal.fetch()
        pyportal.set_background(get_bg_color(value[0]))
        print("Response is", value)
    except RuntimeError as e:
        print("Some error occured, retrying! -", e)
    time.sleep(180)

I've put the code up at https://github.com/shanselman/NightscoutPyPortal. I want to get (make a custom?) a larger BDF (Bitmap Font) that is about twice the size AND includes 45 degree arrows ↗ and ↘ as the font I have is just 24 point and only includes arrows at 90 degrees. Still, great fun and took just an hour!

NOTE: I used the Chortkeh BDF Font viewer to look at the Bitmap Fonts on Windows. I still need to find a larger 48+ PT Arial.

What information would YOU display on a PyPortal?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.
© 2018 Scott Hanselman. All rights reserved.
     

F7 is the greatest PowerShell hotkey that no one uses any more. We must fix this.

Mar 26, 2019

Description:

Thousands of years ago your ancestors, and myself, were using DOS (or CMD) pressing F7 to get this amazing little ASCII box to pop up to pick commands they'd typed before.

Holy crap it's a little ASCII box

When I find myself in cmd.exe I use F7 a lot. Yes, I also speak *nix and Yes, Ctrl-R is amazing and lovely and you're awesome for knowing it and Yes, it works in PowerShell.

Ctrl-R for history works in PowerShell

Here's the tragedy. Ctrl-R for a reverse command search works in PowerShell because of a module called PSReadLine. PSReadLine is basically a part of PowerShell now and makes dozens of little command line editing improvements. It also - not sure why, and I'm still learning - unknowingly blocks the glorious F7 hotkey.

If you remove PSReadLine (you can do this safely, it'll just apply to the current session)

Remove-Module -Name PSReadLine

Why, then you get F7 history with a magical ASCII box back in PowerShell. And as we all know, 4k 3D VR be damned, impress me with ASCII if you want a developer's heart.

There is a StackOverflow Answer with a little PowerShell snippet that will pop up - wait for it - a graphical list with your command history by calling

Set-PSReadlineKeyHandler -Key F7

And basically rebinding the PSReadlineKeyHandler for F7. PSReadline is brilliant, but what I really want to do is tell it to "chill" on F7. I don't want to bind or unbind F7 (it's not bound by default), I just want it passed through.

Until that day, I, and you, can just press Ctrl-R for our reverse history search, or get this sad shadow of an ASCII box by pressing "h." Yes, h is already aliased on your machine to Get-History.

PS C:\Users\scott> h

  Id CommandLine
   -- -----------
    1 dir
    2 Remove-Module -Name PSReadLine

Then you can even type "r 1" to "invoke-history" on item 1.

But I will still mourn my lovely ASCII (High ASCII? ANSI? VT100?) history box.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Getting Started with .NET Core and Docker and the Microsoft Container Registry

Mar 22, 2019

Description:

It's super easy to get started with .NET Core and/or ASP.NET Core with Docker. If you have Docker installed you don't need to install anything to try out .NET Core, of course.

To run a little .NET Core console app:

docker run --rm mcr.microsoft.com/dotnet/core/samples:dotnetapp

And the result:

latest: Pulling from dotnet/core/samples
Hello from .NET Core!
...SNIP...

**Environment**
Platform: .NET Core
OS: Linux 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018

To run a quick little ASP.NET Core website just:

docker run -it --rm -p 8000:80 --name aspnetcore_sample mcr.microsoft.com/dotnet/core/samples:aspnetapp

And here it is running on localhost:8000

Simple ASP.NET Core app under Docker

You can also host ASP.NET Core images with Docker over HTTPS with this image, or run ASP.NET Core apps in Windows Containers.

Note that Microsoft teams are now publishing container images to the MCR (Microsoft Container Registry) so they can use the Azure CDN and pull faster when they are closer to you globally. The images start at MCR and then can be syndicated to other container registries.

The new repos follow:
.NET Core Runtime dependencies (just the stuff .NET Core needs, but not .NET Core itself - useful if you want to distribute your own copy and still want a small container image size)
.NET Core Runtime (just what's needed to run a .NET Core app)
.NET Core SDK (includes the compilers, everything)
ASP.NET Core Runtime (everything you need to RUN your ASP.NET Core web app)

When you "docker pull" you can use tag strings for .NET Core and it works across any supported .NET Core version

SDK: docker pull mcr.microsoft.com/dotnet/core/sdk:2.1
ASP.NET Core Runtime: docker pull mcr.microsoft.com/dotnet/core/aspnet:2.1
.NET Core Runtime: docker pull mcr.microsoft.com/dotnet/core/runtime:2.1
.NET Core Runtime Dependencies: docker pull mcr.microsoft.com/dotnet/core/runtime-deps:2.1

For example, I can run the .NET Core 3.0 SDK and mess around with it like this:

docker run -it mcr.microsoft.com/dotnet/core/sdk:3.0

I've been using Docker to run my unit tests on my podcast site within a container locally. Then I volume mount and dump the test results out to a local folder and inspect them with Visual Studio.

docker build --pull --target testrunner -t podcast:test .
docker run --rm -v c:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test
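For context, the "testrunner" target above refers to a stage in a multi-stage Dockerfile. A sketch of what such a Dockerfile might look like (image tags and paths here are illustrative, not the actual file from the podcast site):

# build stage: restore and compile with the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 AS build
WORKDIR /app
COPY . .
RUN dotnet restore

# testrunner stage: "docker build --target testrunner" stops here
FROM build AS testrunner
WORKDIR /app/hanselminutes.core.tests
# write .trx results into TestResults so the volume mount in the docker run above can pick them up
ENTRYPOINT ["dotnet", "test", "--logger", "trx"]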

I can then either host the Docker container in Azure App Service for Containers, or as little one-off per-second billed instances with Azure Container Instances (ACI).

Have you been using .NET Core in Docker? How has it been going for you?

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

What is Blazor and what is Razor Components?

Mar 19, 2019

Description:

I've blogged a little about Blazor, showing examples like Compiling C# to WASM with Mono and Blazor then Debugging .NET Source with Remote Debugging in Chrome DevTools as well as very early on asking questions like .NET and WebAssembly - Is this the future of the front-end?

Let's back up and level-set.

What is Blazor?

Blazor is a single-page app framework for building interactive client-side Web apps with .NET. Blazor uses open web standards without plugins or code transpilation. Blazor works in all modern web browsers, including mobile browsers.

You write C# instead of JavaScript, and you can use most of the .NET ecosystem of open source libraries. For the most part, if it's .NET Standard, it'll run in the browser. (Of course if you called a Windows API or a Linux-specific API and it didn't exist in the client-side browser world, it's not gonna work, but you get the idea.)

The .NET code runs inside the context of WebAssembly. You're running "a .NET" inside your browser on the client-side with no plugins, no Silverlight, Java, Flash, just open web standards.

WebAssembly is a compact bytecode format optimized for fast download and maximum execution speed.

Here's a great diagram from the Blazor docs.

Blazor runs inside your browser, no plugins needed

Here's where it could get a little confusing. Blazor is the client-side hosting model for Razor Components. I can write Razor Components. I can host them on the server or host them on the client with Blazor.

You may have written Razor in the past in .cshtml files, or more recently in .razor files. You can create and share components using Razor - which is a mix of standard C# and standard HTML, and you can host these Razor Components on either the client or the server.

In this diagram from the docs you can see that the Razor Components are running on the Server and SignalR (over Web Sockets, etc) is remoting them and updating the DOM on the client. This doesn't require WebAssembly on the client; the .NET code runs in the .NET Core CLR (Common Language Runtime) and has full compatibility - you can do anything you'd like as you are no longer limited by the browser's sandbox.

Here's Razor Components running on the server

Per the docs:

Razor Components decouples component rendering logic from how UI updates are applied. ASP.NET Core Razor Components in .NET Core 3.0 adds support for hosting Razor Components on the server in an ASP.NET Core app. UI updates are handled over a SignalR connection.

Here's the canonical "click a button update some HTML" example.

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

You can see this running entirely in the browser, with the C# .NET code running on the client side. .NET DLLs (assemblies) are downloaded and executed by the CLR that's been compiled into WASM and running entirely in the context of the browser.

Note also that I'm stopped at a BREAKPOINT in C# code, except the code is running in the browser and mapped back into JS/WASM world.

Debugging Razor Components on the Client Side

But if I host my app on the server as hosted Razor Components, the C# code runs entirely on the server side and the client-side DOM is updated over a SignalR link. Here I've clicked the button on the client side and hit the breakpoint on the server side in Visual Studio. No, there's no POST and no POST-back. This isn't WebForms - it's Razor Components. It's a SPA app written in C#, not JavaScript, and I can change the location of the running logic, while the UI remains always standard HTML and CSS.

Debugging Razor Components on the Server Side

Looking at how Razor Components and now Phoenix LiveView are offering a new way to manage JavaScript-free stateful server-rendered apps has me realizing it’s the best parts of WebForms where the postback is now a persistent websockets tunnel to the backend and only diffs are sent

— Scott Hanselman (@shanselman) March 16, 2019

It's a pretty exciting time on the open web. There's a lot of great work happening in this space and I'm very interested to see how frameworks like Razor Components/Blazor and Phoenix LiveView change (or don't change) how we write apps for the web.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Xbox Avatar accessories for People with Diabetes! Sponsored by Nightscout and Konsole Kingz

Mar 14, 2019

Description:

My Xbox user name is Glucose for a reason.

This is a passion project of mine. You've likely seen me blog about diabetes for many many years. You may have enjoyed my diabetes hacks like lighting up my keyboard keys to show me my blood sugar, or some of the early work Ben West and I did to bridge Dexcom's cloud with the NightScout open source diabetes management system.

Recently Xbox announced new avatars! They look amazing and the launch was great. They now have avatars in wheelchairs, ones with artificial limbs, and a wide variety of hair and skin tones. This is fantastic as it allows kids (and adults!) to be seen and be represented in their medium of choice, video games.

I was stoked and immediately searched the store for "diabetes." No results. No pumps, sensors, emotes, needles, nothing. So I decided to fix it.

NOW AVAILABLE: Go and buy the Nightscout Diabetes CGM avatar on the Xbox Store now!

I called two friends - my friends at the Nightscout Foundation, dedicated to open source and open data for people with diabetes, as well as my friends at Konsole Kingz, digital avatar creators extraordinaire with over 200 items in the Xbox store from kicks to jerseys and tattoos.

And we did it! We've added our first diabetes avatar top. With some clever coding from Konsole Kingz, it is categorized as a top but gives your avatar not only a Nightscout T-Shirt with your choice of colors, but also a CGM (Continuous Glucose Meter) on your arm!

Miss USA has a CGM. For most diabetics, CGMs are the hardware implants we put in weekly to tell us our blood sugar with minimal finger sticks. They are the most outwardly obvious physical manifestation of our diabetes and we're constantly asked about them. In 2017, Miss USA contestant Krista Ferguson made news by showing her CGM rather than hiding it. This kind of visible representation matters to kids with diabetes - it tells them (and us) that we're OK.

You can find the Nightscout CGM accessory in a number of ways. You can get it online at the Xbox Avatar shop, and when you've bought it, it'll be in the Purchased Tab of the Xbox Avatar Editor, under Closet | Tops.

You can even edit your Xbox Avatar on Windows 10 without an Xbox! Go pick up the Xbox Avatar Editor and install it (on both your PC and Xbox if you like) and you can experiment with shirt and logo color as well.

Consider this a beta release. We are working on improving resolution and quality, but what we really want to know is this - Do you want more Diabetes Xbox Avatar accessories? Insulin pumps on your belt? An emote to check your blood sugar with a finger stick?

Diabetes CGM on an Xbox avatar

If this idea is a good one and is as special to you and your family (and the gamers in your life with diabetes), please SHARE it. Share it on social media and tell your friends. Profits from this avatar item will go to the Nightscout Foundation!

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.
© 2018 Scott Hanselman. All rights reserved.
     

How to stream PC games from Windows 10 to your Xbox One for free

Mar 12, 2019

Description:

Xbox is ready for you to connect to wirelessly. I've been really enjoying my Xbox lately (when the family is asleep) as well as some fun Retrogaming on original consoles. Back in 2015 I showed how you can stream from your Xbox to any PC using the Xbox app from the Windows Store. You can pair your Xbox controller with any PC you've got around (either with the $20 Xbox Wireless Adapter or just with a micro-USB cable you likely have already). In fact, I often walk on the treadmill while streaming games from the Xbox to my little Surface Pro 3.

Then, a year later I did the inverse. I played PC games on my big screen using a SteamLink! Although they've been discontinued, they are out there and they work great. This little box lets you play PC games remotely on your large screens. I have a big PC in my office and I wanted to use the big TV in the living room. The game still runs on the PC but the video/audio and controls are all remoted to the Xbox. Plus, SteamLink only works with the Steam app running and is optimized for Steam games. It's a single task box and one more thing to plug into HDMI but it works well.

Fast-forward to today and I learned that Windows 10 can project its screen to an Xbox One AND you can use your Xbox One controller to control it (it's paired on the Xbox side) and play games or run apps. No extra equipment needed.

I installed the Xbox Wireless Display App on my Xbox One. Then on my PC, here's what I see upon pressing Win+P and clicking "Connect to Wireless Display."

Connected to Xbox One

Once I've duplicated my display, you can see here I'm writing this blog post wirelessly projected to the Xbox. It just worked. Took 5 minutes to do this.

If you're tech savvy, you may say, isn't this "just Miracast" and "hasn't this always been possible?" Yes and no. What's been updated is the Xbox Wireless Display App that you'll want to install and run on your Xbox. You may have been able to project your PC screen to various sticks and Miracast adapters, but this free app makes your Xbox a receiver for Miracast broadcasts (over wifi or LAN) and most importantly - now you can use your Xbox controller already paired to the Xbox to control the remote PC. You can use that control to play games or switch to mouse control mode with Start+Select and mouse around with your Xbox thumbsticks!


If I hit the menu button I can see how the controllers map to PC controls. No remote keyboard and mouse connected from the Xbox...yet. (and to be clear, no word if that will ever be supported but it'd be cool!)

Controller Mapping for PC to Xbox

To make sure you can do this, run DxDiag and save all information into "DxDiag.txt." Here's part of mine. There's nothing special about my machine. It's worth pointing out I have no Wifi adapter on this machine and it has an NVidia 1080 video card. Miracast is happening over the Wired LAN (local area network) in my house. This is Miracast over Infrastructure and it's in Windows 10 since version 1703 (March 2017).

------------------
System Information
------------------
Machine name: IRONHEART
Operating System: Windows 10 Pro 64-bit
Processor: Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz (20 CPUs), ~3.3GHz
Memory: 32768MB RAM
DirectX Version: DirectX 12
User DPI Setting: 144 DPI (150 percent)
System DPI Setting: 144 DPI (150 percent)
Miracast: Available, with HDCP

When you've connected your PC to your Xbox and are streaming FROM your PC to your Xbox, you'll see this bar at the top of the PC side. There are three optimization settings for Gaming, Working, and Watching Videos. I assume these are balancing crispness/quality with framerate and latency.

Gaming, Working, Watching Videos

Now let's take it to the next level. I can run Steam Big Picture and here I am running Batman: Arkham Origins on my PC, but played on and controlled from my Xbox in the other room!

Ok this is amazing. You run “wireless display” on your Xbox. Then on your PC, just WinKey+P on your PC and connect to wireless display. This is Batman on my Xbox...RUNNING ON MY PC pic.twitter.com/pyxmHLe3fz

— Scott Hanselman (@shanselman) March 14, 2019

I like that I don't need the SteamLink. I find that this runs more reliably and more easily than my original set up. I like that I can switch the Xbox controller from controller mode to mouse mode. And most of all I like that this doesn't require any custom setup, extra work, or drivers. It just worked out of the box for me.

Your mileage may vary and I'm trying to figure out why some people's video card drivers don't allow this, leaving them with no "Connect to a Wireless Display" option in their Win+P menu. If you figure it out, please sound off in the comments.

Give it a try! I hope you enjoy it. I'm having a blast.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

How to parse string dates with a two digit year and split on the right century in C#

Mar 7, 2019

Description:

So you've been asked to parse some dates, except the years are two digit years. For example, dates like "12 Jun 30" are ambiguous...or are they?

If "12 Jun 30" is intended to express a birthday, given it's 2019 as the of writing of this post, we can assume it means 1930. But if the input is "12 Jun 18" is that last year, or is that a 101 year old person's birthday?

Enter the Calendar.TwoDigitYearMax property.

For example, if this property is set to 2029, the 100-year range is from 1930 to 2029. Therefore, a 2-digit value of 30 is interpreted as 1930, while a 2-digit value of 29 is interpreted as 2029.

The initial value for this property comes out of the DEPTHS of the region and languages portion of the Control Panel. Note way down there in "additional date, time, & regional settings" in the "more settings" and "date" tab, there's a setting that (currently) splits on 1950 and 2049.

Two Digit Year regional settings

If you're writing a server-side app that parses two digit dates you'll want to be conscious and explicit about what behavior you WANT so that you're not surprised.

Setting TwoDigitYearMax sets a 100 year RANGE that your two digit years will be interpreted to be within. You can also just change it on the current thread's current culture's calendar. It's up to you.
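If you want the thread-level variant, a little sketch looks like this (I'm assuming an en-US culture here so the month name parses deterministically; the CultureInfo.CurrentCulture setter has been available since .NET Framework 4.6 and in .NET Core):

using System;
using System.Globalization;

// Build a culture with the pivot year we want and make it the thread's current culture.
var myCulture = new CultureInfo("en-US");
myCulture.Calendar.TwoDigitYearMax = 2099;
CultureInfo.CurrentCulture = myCulture;

// Subsequent parses on this thread now pivot at 2099, so "30" means 2030.
Console.WriteLine(DateTime.Parse("12 Jun 30").ToLongDateString());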

For example, this little app:

string dateString = "12 Jun 30"; //from user input
DateTime result;
CultureInfo culture = new CultureInfo("en-US");
DateTime.TryParse(dateString, culture, DateTimeStyles.None, out result);
Console.WriteLine(result.ToLongDateString());

culture.Calendar.TwoDigitYearMax = 2099;

DateTime.TryParse(dateString, culture, DateTimeStyles.None, out result);
Console.WriteLine(result.ToLongDateString());

gives this output:

Thursday, June 12, 1930
Wednesday, June 12, 2030

Note that I've changed TwoDigitYearMax and moved it up so the range is 2000-2099, meaning "30" is assumed to be 2030, within that 100 year range.

Hope this helps!

Sponsor: Stop wasting time trying to track down the cause of bugs. Sentry.io provides full stack error tracking that lets you monitor and fix problems in real time. If you can program it, we can make it far easier to fix any errors you encounter with it.


© 2018 Scott Hanselman. All rights reserved.
     

Converting an Excel Worksheet into a JSON document with C# and .NET Core and ExcelDataReader

Mar 6, 2019

Description:

Excel isn't a database, except when it is. I've been working on a little idea where I'd have an app (maybe a mobile app with Xamarin or maybe a SPA, I haven't decided yet) for easily accessing and searching across the 500+ videos from https://azure.microsoft.com/en-us/resources/videos/azure-friday/

HOWEVER. I don't have access to the database that hosts the metadata and while I'm trying to get at least read-only access to it (long story) the best I can do is a giant Excel spreadsheet dump that I was given that has all the video details.

This, of course, is sub-optimal, but regardless of how you feel about it, it's a database. Or, a data source at the very least! Additionally, since it was always going to end up as JSON in a cached in-memory database regardless, it doesn't matter much to me.

In real-world business scenarios, sometimes the authoritative source is an Excel sheet, sometimes it's a SQL database, and sometimes it's a flat file. Who knows?

What's most important (after clean data) is that the process one builds around that authoritative source is reliable and repeatable. For example, if I want to build a little app or one page website, yes, ideally I'd have a direct connection to the SQL back end. Other alternative sources could be a JSON file sitting on a simple storage endpoint accessible with a single HTTP GET. If the Excel sheet is on OneDrive/SharePoint/DropBox/whatever, I could have a small serverless function run when the file changes (or on a daily schedule) that would convert the Excel sheet into a JSON file and drop that file onto storage. Hopefully you get the idea. The goal here is clean, reliable pragmatism. I'll deal with the larger business process issue and/or system architecture and/or permissions issue later. For now the "interface" for my app is JSON.

So I need some JSON and I have this Excel sheet.

Turns out there's a lovely open source project and NuGet package called ExcelDataReader. There's been ways to get data out of Excel for decades. Literally decades. One of my first jobs was automating Microsoft Excel with Visual Basic 3.0 with COM Automation. I even blogged about getting data out of Excel into ASP.NET 16 years ago!

Today I'll use ExcelDataReader. It's really nice and it took less than an hour to get exactly what I wanted. I haven't gone and made it super clean and generic, refactored out a bunch of helper functions, so I'm interested in your thoughts. After I get this tight and reliable I'll drop it into an Azure Function and then focus on getting the JSON directly from the source.

A few gotchas that surprised me. I got a "System.NotSupportedException: No data is available for encoding 1252." Windows-1252 or CP-1252 (code page) is an old school text encoding (it's effectively ISO 8859-1). Turns out newer .NETs like .NET Core need the System.Text.Encoding.CodePages package as well as a call to System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance); to set it up for success. Also, that extra call to reader.Read at the start to skip over the Title row had me pause a moment.

using System;
using System.IO;
using ExcelDataReader;
using System.Text;
using Newtonsoft.Json;

namespace AzureFridayToJson
{
    class Program
    {
        static void Main(string[] args)
        {
            Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

            var inFilePath = args[0];
            var outFilePath = args[1];

            using (var inFile = File.Open(inFilePath, FileMode.Open, FileAccess.Read))
            using (var outFile = File.CreateText(outFilePath))
            {
                using (var reader = ExcelReaderFactory.CreateReader(inFile, new ExcelReaderConfiguration()
                    { FallbackEncoding = Encoding.GetEncoding(1252) }))
                using (var writer = new JsonTextWriter(outFile))
                {
                    writer.Formatting = Formatting.Indented; //I likes it tidy
                    writer.WriteStartArray();
                    reader.Read(); //SKIP FIRST ROW, it's TITLES.
                    do
                    {
                        while (reader.Read())
                        {
                            //peek ahead? Bail before we start anything so we don't get an empty object
                            var status = reader.GetString(0);
                            if (string.IsNullOrEmpty(status)) break;

                            writer.WriteStartObject();
                            writer.WritePropertyName("Status");
                            writer.WriteValue(status);

                            writer.WritePropertyName("Title");
                            writer.WriteValue(reader.GetString(1));

                            writer.WritePropertyName("Host");
                            writer.WriteValue(reader.GetString(6));

                            writer.WritePropertyName("Guest");
                            writer.WriteValue(reader.GetString(7));

                            writer.WritePropertyName("Episode");
                            writer.WriteValue(Convert.ToInt32(reader.GetDouble(2)));

                            writer.WritePropertyName("Live");
                            writer.WriteValue(reader.GetDateTime(5));

                            writer.WritePropertyName("Url");
                            writer.WriteValue(reader.GetString(11));

                            writer.WritePropertyName("EmbedUrl");
                            writer.WriteValue($"{reader.GetString(11)}player");
                            /*
                            <iframe src="https://channel9.msdn.com/Shows/Azure-Friday/Erich-Gamma-introduces-us-to-Visual-Studio-Online-integrated-with-the-Windows-Azure-Portal-Part-1/player" width="960" height="540" allowFullScreen frameBorder="0"></iframe>
                            */

                            writer.WriteEndObject();
                        }
                    } while (reader.NextResult());
                    writer.WriteEndArray();
                }
            }
        }
    }
}

The first pass is on GitHub at https://github.com/shanselman/AzureFridayToJson and the resulting JSON looks like this:

[
{
"Status": "Live",
"Title": "Introduction to Azure Integration Service Environment for Logic Apps",
"Host": "Scott Hanselman",
"Guest": "Kevin Lam",
"Episode": 528,
"Live": "2019-02-26T00:00:00",
"Url": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-introduction-to-azure-integration-service-environment-for-logic-apps",
"embedUrl": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-introduction-to-azure-integration-service-environment-for-logic-appsplayer"
},
{
"Status": "Live",
"Title": "An overview of Azure Integration Services",
"Host": "Lara Rubbelke",
"Guest": "Matthew Farmer",
"Episode": 527,
"Live": "2019-02-22T00:00:00",
"Url": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-an-overview-of-azure-integration-services",
"embedUrl": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-an-overview-of-azure-integration-servicesplayer"
},
...SNIP...

Thoughts? There's a dozen ways to have done this. How would you do this? Dump it into a DataSet and serialize objects to JSON, make an array and do the same, automate Excel itself (please don't do this), and on and on.

Certainly this would be easier if I could get a CSV file or something from the business person, but the issue is that I'm regularly getting new drops of this same sheet with new records added. Getting the suit to Save As | CSV reliably and regularly isn't sustainable.

Sponsor: Stop wasting time trying to track down the cause of bugs. Sentry.io provides full stack error tracking that lets you monitor and fix problems in real time. If you can program it, we can make it far easier to fix any errors you encounter with it.


© 2018 Scott Hanselman. All rights reserved.
     

EditorConfig code formatting from the command line with .NET Core's dotnet format global tool

Mar 1, 2019

Description:

"EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs." Rather than you having to keep your code in whatever format the team has agreed on, you can check in an .editorconfig file and your editor of choice will keep things in line.

If you're a .NET developer like myself, there's a ton of great .NET EditorConfig options you can set to ensure the team uses consistent Language Conventions, Naming Conventions, and Formatting Rules.

Language Conventions are rules pertaining to the C# or Visual Basic language, for example, var/explicit type, use expression-bodied member. Formatting Rules are rules regarding the layout and structure of your code in order to make it easier to read, for example, Allman braces, spaces in control blocks. Naming Conventions are rules respecting the way objects are named, for example, async methods must end in "Async".
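For example, a tiny .editorconfig fragment covering one rule from each of those buckets might look like this (purely illustrative settings, not a recommended team standard):

root = true

[*.cs]
# Formatting rules: spaces, and Allman-style braces on their own line
indent_style = space
indent_size = 4
csharp_new_line_before_open_brace = all
# Language convention: prefer 'var' for built-in types, surfaced as a suggestion
csharp_style_var_for_built_in_types = true:suggestion
# Naming conventions (dotnet_naming_rule.*) can enforce things like the "Async" suffix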

If you're using Visual Studio 2010, 2012, 2013, or 2015, fear not. There's at least a basic EditorConfig free extension for you that enforces the basic rules. There is also an extension for Visual Studio Code to support EditorConfig files that takes just seconds to install.

ASIDE: If you are looking for a decent default for C#, take a look at the .editorconfig that the C# Roslyn compiler team uses. I don't know about you, but my brain exploded when I saw that they used spaces vs tabs.

But! What if you want this coding formatting goodness at the dotnet command line? You can use "dotnet format" as a global tool! It's one line to install, then it's available everywhere for all your .NET Core apps.

D:\github\hanselminutes-core> dotnet tool install -g dotnet-format
You can invoke the tool using the following command: dotnet-format
Tool 'dotnet-format' (version '3.0.2') was successfully installed.
D:\github\hanselminutes-core> dotnet format
Formatting code files in workspace 'D:\github\hanselminutes-core\hanselminutes-core.sln'.
Found project reference without a matching metadata reference: D:\github\hanselminutes-core\hanselminutes.core\hanselminutes-core.csproj
Formatting code files in project 'hanselminutes-core'.
Formatting code files in project 'hanselminutes.core.tests'.
Format complete.

You can see in the screenshot below where dotnet format used its scandalous defaults to move my end of line { to its own line! Well, if that's what the team wants! ;)

My code is automatically formatted by the dotnet format tool

Of course, dotnet format is all open source and up at https://github.com/dotnet/format. You can install the stable build OR a development build from myGet.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.
© 2018 Scott Hanselman. All rights reserved.
     

Hey Siri, what's my blood sugar? Learning to Code with Apple's iPhone Shortcuts

Feb 27, 2019

Description:

A library of dozens of shortcuts on iOS. Bear with me here. Apple Shortcuts (free on the App Store) is extraordinary and you shouldn't sleep on it. In fact, you should use it and explore it as it's amazing. I would go even further and say it could be a great place to learn to code!

Apple Shortcuts on iPhone is a lot like Microsoft Flow, except for your phone. Shortcuts let you string together Actions (ahem, functions) into multi-step tasks (ahem, functions that call functions). There's a rich and growing gallery of shortcuts that you can copy into your local (to your phone) library. You can then name them and invoke your Shortcuts with Siri.

Here's a few links to Shortcuts that (assuming you are reading this from your iPhone) you can add to your library with a click!

Do Not Disturb Timer
Expand URL
Intelligent Power (from Reddit)
Check Spelling
Convert Video to GIF
Download YouTube Video

Once you have a shortcut you can invoke it as an item/icon on your springboard/home screen, you can have Siri run it with your voice, or invoke it via a "share sheet" that is available in all apps.

It would be reasonable to think this was a simple macro system with a few basic building blocks, but I don't think Apple's team gets enough credit. This is a complete development environment on your phone.

For example, here's an incredibly intricate and powerful Shortcut for when one is pulled over by the police.

It pauses any music that may be playing, turns down your brightness and volume, turns on Do Not Disturb, and sends a message to the contact of your choosing letting them know you’re being pulled over and what your current location is. It then opens your front camera and starts a video recording so you have a video record of being pulled over.

Once you stop the recording it sends a copy of the video to a contact you specify, puts volume and brightness back to where they were, turns off Do Not Disturb, and gives you the option to send to iCloud Drive or Dropbox!

You could then record a Siri shortcut and just say "Hey Siri, I'm being pulled over" and all this happens automatically, hands free.

Take a look at the Laundry Timer app here. It's a very classic "take input and do a thing" program. You can build and extend workflows like this and the data from one flows through to the next one.

A multiple step shortcut with many actions that flow data into the next, organized in a pipeline

Note the Shortcut above. The "Adjust Date" action pops up a Date and is used as a Diff(erence) against the "Current Date" action, then used again in the Add New Reminder as an input to "Add New Reminder." These contextual variables flow through and are easily accessible in this genius UI. It really is near-perfect. Try it.

At this point you may be thinking, um, OK, that's cute, but where's the learn to code revolution here? It's not that open-ended of a system, what can I really do?

Like many connected cars, my car has a kind of REST API that its app uses to do things like heat up the climate system. Here I can literally POST (like Curl, but on your iPhone!) to an endpoint and pass in a FORM and parse the resulting JSON. Wow! Drink that in. You can write complex functions with iOS Shortcuts. Really.

calling a REST API with an iOS shortcut

Hang on. My body has a REST API. I use the open source Nightscout project to create a REST API on top of my Diabetes Continuous Glucose Meter then surface it in places like my lighted keyboard or even my Git Prompt.

How hard would it be to - right now as I make this blog post - write a method to have Siri retrieve my blood sugar and announce it to me when I say "Siri what's my blood sugar?" Let's see!

I make a URL object with my REST API that returns my sugar as JSON, it gets passed into Get Contents of URL. That makes a Dictionary from the Input, then gets the value of "sgv" (serum glucose value) and then the result of that is used to make a string with the Text action.
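For comparison, here's the same fetch as a little C# sketch (the URL is a placeholder for your own Nightscout site; entries.json comes back as an array, and the first entry carries the "sgv" value):

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class WhatsMySugar
{
    static async Task Main()
    {
        // Placeholder - substitute your own Nightscout instance.
        var url = "https://YOURNIGHTSCOUTSITE/api/v1/entries.json?count=1";

        using (var http = new HttpClient())
        {
            var json = await http.GetStringAsync(url);
            using (var doc = JsonDocument.Parse(json))
            {
                // The response is an array of entries; grab the serum glucose value off the first one.
                var sgv = doc.RootElement[0].GetProperty("sgv").GetInt32();
                Console.WriteLine($"{sgv} mg/dl");
            }
        }
    }
}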

Preparing to make a shortcut

Now I have Siri SAY it. I can "debug" by running the Shortcut with the play button.

Building a shortcut

Then I can Add it to Siri and record my phrase. Here's me saying "what's my blood sugar" and she's telling me. Yes, I know. I had a cookie. I deserved it.

Running your shortcut

This is just the start. It could also tell me my trend lines, text someone if it's high, make a chart - I figure it can do anything! I'm going to continue to explore Shortcuts but this little NightScout one can be downloaded to YOUR phone here. You'll only need to put in YOUR own URL for your Nightscout instance.

Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


© 2018 Scott Hanselman. All rights reserved.
     

Learning about .NET Core futures by poking around at David Fowler's GitHub

Feb 22, 2019

Description:

A picture of David silently judging my code, but with love. David Fowler is the ASP.NET Core Architect (and an amazing, highly technical public speaker) and I've learned a lot from watching him code. However, what's the best way for YOU to learn from folks like David if you can't sit on their shoulder? Why, look at their GitHub!

Since .NET Core (and most of Microsoft) is not only open source but also developed in the open now on GitHub, we can actually watch folks in their day to day work as they commit code to projects like the C# compiler, .NET Core, and ASP.NET Core.

Even more interestingly, we can look at David's github here https://github.com/davidfowl and then under Repositories see what he's up to, filter by language and type, and explore! Sometimes I just explore the Pull Requests on projects like ASP.NET Core.

You can have Private repositories on GitHub, as I do, and as I'm sure David does. But GitHub is a social network for code and it's more fun and a better learning experience when we can see each other's code and read it. Read with a critical eye, but without judgment, as you may not have all the context that the author does. If you went to my GitHub, https://github.com/shanselman you might be disappointed, but you also may be missing the big picture. Just consider that as you Follow people and explore their code.

David is an advanced .NET developer, while, for example, I am comparatively intermediate. So I realize that not all of David's code is FOR me. It's a scratchpad; it's not educational how-to workshops. However, I can pick up cool idioms, interesting directions the tech may be going, and more importantly - prototypes and spikes. Spikes are folks testing out technical ideas. They may not be complete. In fact, they may never be complete. But some may be harbingers of things to come.

Here are a few things I learned today.

gRPC for .NET Core

For example, at https://github.com/davidfowl/grpc-dotnet I can see David has forked (copied) gRPC for dotnet and his goal is working with the gRPC folks to make a fully supported version of gRPC for production workloads with .NET Core! Here are the stated goals:

We plan to implement a fully-managed version of gRPC for .NET that will be built on top of the ASP.NET Core HTTP/2 server.
Good integration with the rest of the ASP.NET Core ecosystem
High performance (we plan to utilize some of the cutting edge performance features from ASP.NET Core and in the .NET platform itself)

That sounds cool! I can go learn that gRPC is a modern (Google-sponsored) Remote Procedure Call framework that can run anywhere. It's used by Netflix and Square and supports basically any language and any environment. It's nice for this microservice world we are entering, and hopefully it has learned from the sins of DCOM and CORBA and RMI, because I was there and it sucked.

Nothing to see here but moving to a new JSON serializer

This Web.Framework sounds fun, and I'll be sure to take the description to heart.

says "Lame name, just a prototype, nothing to see here (move along)"

You can see David and James Newton-King kicking ideas around as you explore the commit log. However, the most interesting commit IMHO is when David moves this little spike from using JSON.NET (the ubiquitous 3rd party JSON serializer) to the new emerging official System.Text.Json. Here is the commit with unified differences.

It's a small change but it also makes me feel good about the API underneath this new JSON API that's coming. My takeaway is that it's not as scary as I'd assumed. Looks like a Good Thing(tm).

A diff of code shows that just one line is changed to move JSON serializers
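If you're curious what the new serializer feels like to call, here's a tiny sketch using the System.Text.Json API as it eventually shipped (the exact preview surface David was coding against may have differed slightly):

using System;
using System.Text.Json;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        // Serialize a POCO to a JSON string, then round-trip it back
        var json = JsonSerializer.Serialize(new Person { Name = "David", Age = 42 });
        Console.WriteLine(json); // {"Name":"David","Age":42}

        var person = JsonSerializer.Deserialize<Person>(json);
        Console.WriteLine(person.Name); // David
    }
}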

 

Cool!

Multi-Protocol ASP.NET Core

This looks interesting.

"The following sample shows how you can host a TCP server and HTTP server in the same ASP.NET Core application. Under the covers, it's the same server (Kestrel) running different protocols on different ports. The ConnectionHandler is a new primitive introduced in ASP.NET Core 2.1 to support non-HTTP protocols."

I didn't know you could do that! Looks like this sample hasn't changed much since it was conceived of in 2018, but then in the last month it's been updated twice and it appears to be part of a larger, slow-moving architectural issue called Bedrock that's moving forward.

I learned that Kestrel (the ASP.NET Core web server) has a "ListenLocalhost" option on its options object!

WebHost.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // This shows how a custom framework could plug in an experience without using Kestrel APIs directly
        services.AddFramework(new IPEndPoint(IPAddress.Loopback, 8009));
    })
    .UseKestrel(options =>
    {
        // TCP 8007
        options.ListenLocalhost(8007, builder =>
        {
            builder.UseConnectionHandler<MyEchoConnectionHandler>();
        });

        // HTTP 5000
        options.ListenLocalhost(5000);

        // HTTPS 5001
        options.ListenLocalhost(5001, builder =>
        {
            builder.UseHttps();
        });
    })
    .UseStartup<Startup>();

I can see here that TCP port 8007 is custom and uses a custom ConnectionHandler, which I also didn't know existed! I can then look at the implementation of that handler and it's cool how clean the API is. You can get the result cleanly off the Transport buffer. You're doing low-level TCP but it doesn't feel low level.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Connections;
using Microsoft.Extensions.Logging;

namespace KestrelTcpDemo
{
    public class MyEchoConnectionHandler : ConnectionHandler
    {
        private readonly ILogger<MyEchoConnectionHandler> _logger;

        public MyEchoConnectionHandler(ILogger<MyEchoConnectionHandler> logger)
        {
            _logger = logger;
        }

        public override async Task OnConnectedAsync(ConnectionContext connection)
        {
            _logger.LogInformation(connection.ConnectionId + " connected");

            while (true)
            {
                var result = await connection.Transport.Input.ReadAsync();
                var buffer = result.Buffer;

                foreach (var segment in buffer)
                {
                    await connection.Transport.Output.WriteAsync(segment);
                }

                if (result.IsCompleted)
                {
                    break;
                }

                connection.Transport.Input.AdvanceTo(buffer.End);
            }

            _logger.LogInformation(connection.ConnectionId + " disconnected");
        }
    }
}

Pretty slick. This just echoes what is sent to that port, but not only has it educated me about a thing I didn't know about, it's something I can mentally file away until I need it!
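If you want to kick the tires on a handler like that, a plain TcpClient is all it takes. Here's a quick sketch that assumes the sample above is listening on localhost:8007:

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class EchoClient
{
    static async Task Main()
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync("localhost", 8007);
            var stream = client.GetStream();

            // Send some bytes to the echo handler
            var message = Encoding.UTF8.GetBytes("Hello, Kestrel over TCP!");
            await stream.WriteAsync(message, 0, message.Length);

            // Read back whatever the server echoes
            var buffer = new byte[256];
            var read = await stream.ReadAsync(buffer, 0, buffer.Length);
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read)); // should print the same text back
        }
    }
}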

All of these things I learned in just 30 minutes of exploring someone's public repository.

What kinds of code do you like to read and what have you learned from just poking around?

Sponsor: Get the latest JetBrains Rider for remote debugging via SSH, SQL injections, a new Search Everywhere popup, and improved Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Right click publish quickly to Azure App Services with VS Code extensions and zipdeploy

Feb 20, 2019

Description:

I wanted to see the fastest way to get an ASP.NET Core web site up (for free) on Azure. First, I could use Visual Studio Community (which is free), just right click Publish, sign into Azure, and make a free Web App, and I'm cool. But I also wanted to see what it was like on Visual Studio Code (which would work on Linux, etc.).

I downloaded these things. This is 10-15 min tops for download AND install. Likely less on a fast connection.

VS Code https://code.visualstudio.com
.NET Core https://dotnet.microsoft.com
Azure App Service extension for VS Code
C# extension for VS Code (NOTE: this will get automatically recommended to you anyway when you open a .cs file for the first time.)

This also assumes you have a free Azure account https://azure.microsoft.com/free/

I made a new ASP.NET web site with "dotnet new razor" at the command line. The Azure App Service extension makes a new Azure icon appear on the left of VS Code. I can see my subscription(s) and any sites I've made before. I can right-click the top of the tree or just click the + plus sign.

image

TRICK: The default mode of the Azure App Service extension is "basic" mode. This is fine for messing around, but it will assume a bunch of things. You don't have control over the location (it'll pick a nearby one) or really anything. Again, it's fine. However, if you DO want explicit prompts for name, location, OS, runtime, etc you can turn on "appService.advanced" in File | Preferences | Settings (or Ctrl+,). Don't feel you need to, but know it's possible.
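If you'd rather flip it in your settings.json directly, I believe it's just a boolean toggle. The setting name comes from the extension; treat the exact shape here as my assumption:

{
    "appService.advanced": true
}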

appService.advanced

Now, in my opinion, deploying apps (.NET Core, Node, or otherwise) directly from source can be a little confusing, and it doesn't really scale for anything other than proofs of concept. There's usually a "build" step, and ideally you'll have a CI/CD (Continuous Integration/Continuous Deployment) pipeline for anything of any real size. It's easier than you think - you can likely get a basic DevOps pipeline up in an hour or so. I commit to GitHub and it just deploys to Azure.

That said, a quickie deploy has value, so I wanted to do it. You can do a "git deploy" to Azure where Azure is a git remote and you just "git push azure master," but...you're pushing source and Azure "builds" it in Azure App Service using a thing called Kudu. That means it'll run npm install, dotnet restore, etc., so it'll take some time. You could instead deploy a container to Azure - just push it to a Container Registry, then spin up the container.

However, Azure also has a little known but rather clever "zipdeploy" feature. Once you've configured your Azure App Service for zipdeployment as a source, you can just POST a new ZIP deployment with Curl!

curl -X POST -u <deployment_user> --data-binary @"<zip_file_path>" https://<app_name>.scm.azurewebsites.net/api/zipdeploy

You might find that weird, or you might find it elegant. If it's the latter, use it. If it's the former, don't. You can even do it with minimal or no downtime by deploying to a staging slot and using Auto Swap.
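And since it's just an authenticated POST, you don't even need curl. Here's a hedged C# sketch of the same call, where the deployment credentials, zip path, and app name are placeholders you'd fill in:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ZipDeploy
{
    static async Task Main()
    {
        var user = "<deployment_user>";         // placeholder - your deployment credentials
        var password = "<deployment_password>"; // placeholder
        var zipPath = "publish.zip";            // your zipped publish folder
        var url = "https://<app_name>.scm.azurewebsites.net/api/zipdeploy";

        using (var client = new HttpClient())
        {
            // Basic auth, same as curl's -u
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // POST the zip bytes, same as curl's --data-binary
            var response = await client.PostAsync(url, new ByteArrayContent(File.ReadAllBytes(zipPath)));
            Console.WriteLine(response.StatusCode);
        }
    }
}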

I'm going to use the Azure App Service extension in VS Code and it's going to hide all of this and it'll just publish in one click.

Here's the important part if you want it to just work and work easily. You'll want to deploy from a folder that represents your published app. That means your app in a state that it's ready to go.

With .NET Core, the easiest way is to dotnet publish. Then I'm right clicking on that publish folder as seen in this screenshot. Given that the extension is zipping up the target folder and deploying it, I want to publish the publish folder, not the root of my source folder.

image

That will actually make a file .vscode/settings.json that will tell VS Code's Azure App Service extension what folder to deploy from in the future, thereby simplifying things.

{
    "appService.defaultWebAppToDeploy": "/subscriptions/GUID/resourceGroups/appsvc_rg_Windows_CentralUS/providers/Microsoft.Web/sites/fancyweb1",
    "appService.deploySubpath": "bin\\Debug\\netcoreapp2.2\\publish"
}

Below you can see the dialog that pops up "Always deploy the workspace ___ to ___" if you click Yes that will create the setting above specific to your application.

Always deploy the workspace web1 to fancyweb1

Now when I deploy, I can right click from anywhere and it will zipdeploy right to my site. Note the log below.

image

With this extension I can even right click and "Start Streaming Logs" and get the log output of my Azure App Service as it runs, right in the output pane of VS Code.

Start Streaming Logs

This will make things pretty easy for my simplest sites and proofs of concept. Give it a try!

Sponsor: Get the latest JetBrains Rider for remote debugging via SSH, SQL injections, a new Search Everywhere popup, and improved Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Exploring nopCommerce - open source e-commerce shopping cart platform in .NET Core

Feb 15, 2019

Description:

nopCommerce demo site

I've been exploring nopCommerce. It's an open source e-commerce shopping cart. I spoke at their conference in New York a few years ago and they were considering moving to open source and cross-platform .NET Core from the Windows-only .NET Framework, so I figured it was time for me to check in on their progress.

I headed over to https://github.com/nopSolutions/nopCommerce and cloned the repo. I have .NET Core 2.2 installed that I grabbed here. You can check out their official site and their live demo store.

It was a simple git clone and a "dotnet build" and it built and ran immediately. It's always nice to have a site "just work" after a clone (it's kind of a low bar, but no matter what the language, it's always a joy when it works).

I have SQL Express installed but I could just as easily use SQL Server for Linux running under Docker. I used the standard SQL Server Express connection string: "Server=localhost\SQLEXPRESS;Database=master;Trusted_Connection=True;" and was off and running.

nopCommerce is easy to setup

It's got a very complete /admin page with lots of Commerce-specific reports, the ability to edit the catalog, have sales, manage customers, deal with product reviews, set promotions, and more. It's like WordPress for Stores. Everything you'd need to put up a store in a few hours.

Very nice admin site in nopCommerce

nopCommerce has a very rich plugin marketplace. Basically anything you'd need is there, but you could always write your own in .NET Core. For example, if I want to add PayPal as a payment option, there are 30 plugins to choose from!

NOTE: If you have any theming issues (css not showing up) with just using "dotnet build," you can try "msbuild" or opening the SLN in Visual Studio Community 2017 or newer. You may be seeing folders for plugins and themes not being copied over with dotnet build. Or you can "dotnet publish" and run from the publish folder.

Now, to be clear, I just literally cloned the HEAD of the actively developed version and had no problems, but you might want to use the most stable version from 2018 depending on your needs. Either way, nopCommerce is a clean code base that's working great on .NET Core. The community is VERY active, and there's a company behind the open source version that can do the work for you, customize, service, and support.

Sponsor: The next generation of Jira has arrived, with new roadmaps, more flexible boards, overhauled configuration, and dozens of new integrations. Whatever new awaits you, begin it here. In a new Jira.
© 2018 Scott Hanselman. All rights reserved.
     

How to convert an IMG file to a standard ISO easily with Linux on Windows 10

Feb 12, 2019

Description:

Modded Goldstar 3DO for USB

The optical disc drive is giving out on my GoldStar 3DO machine. It's nearly 30 years old. I want to make sure that the kids and I can still play our 3DO discs. I ordered this fantastic USB mod for the 3DO from a fellow out of Belarus. It came and it's great. It includes a game/file selector app that you boot off of if you put it in the root of a FAT32 formatted USB drive.

However, when I cloned my collection of CD-ROMS I ended up with a bunch of IMG files, and this mod wants ISO files. I thought my cloner was going to give me ISOs. I did the obvious thing and googled for "how to convert an img file to an iso."

This plunged me into the hellscape that is CNET and Major Geeks download wrappers. Every useful application or utility out there is hidden on a page filled with Download Now buttons that aren't the button you want OR if you get the app you want, it's actually a Chrome Search hijacker. I just want to convert a damn IMG to an ISO. If you want to do this on Windows you're going to be installing a bunch of virus-laden trial ISO cracking crap.

Fortunately, Windows 10 can run Linux very nicely, thank you very much. Go install Ubuntu from the Windows Store and get set up ASAP.

I just installed ccd2iso inside Ubuntu on my Windows 10 machine.

scott@IRONHEART:~$ sudo apt install ccd2iso
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
ccd2iso
0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.
Need to get 7406 B of archives.
After this operation, 26.6 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 ccd2iso amd64 0.3-7 [7406 B]
Fetched 7406 B in 0s (21.0 kB/s)
Selecting previously unselected package ccd2iso.
(Reading database ... 61432 files and directories currently installed.)
Preparing to unpack .../ccd2iso_0.3-7_amd64.deb ...
Unpacking ccd2iso (0.3-7) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up ccd2iso (0.3-7) ...
scott@IRONHEART:~$ cd /mnt/c/Users/scott/Desktop/3do/

Then I cd (change directory) into my file system where my IMG backups are. Note that my C:\ drive on Windows is at /mnt/c so you can see me in a folder on my Desktop here. Then just run ccd2iso.

scott@IRONHEART:/mnt/c/Users/scott/Desktop/3do$ ccd2iso AloneInTheDark.img AloneInTheDark.iso
179500 sector written
Done.

Boom. Super fast and does the job and now I'm up and running! Regardless of why you got to this blog post and needed to convert an IMG to an ISO, I hope this helps and saves you some time!

Sponsor: The next generation of Jira has arrived, with new roadmaps, more flexible boards, overhauled configuration, and dozens of new integrations. Whatever new awaits you, begin it here. In a new Jira. 


© 2018 Scott Hanselman. All rights reserved.
     

Lighting up my DasKeyboard with Blood Sugar changes using my body's REST API

Feb 7, 2019

Description:

image

I've long blogged about the intersection of diabetes and technology. From the sad state of diabetes tech in 2012 to its recent promising resurgence, it's clear that we are not waiting.

If you're a Type 1 Diabetic using a CGM - a continuous glucose meter - you'll want to set up Nightscout so you can have a REST API for your sugar. The CGM checks my blood sugar every 5 minutes, it hops via BLE over to my phone and then to the cloud. You'll want your sugars stored in cloud storage that YOU control. CGM vendors have their own cloud, but we can easily bridge over to a MongoDB database.

I run Nightscout in Azure and my body has a REST API. I can do an HTTP GET like this:

/api/v1/entries.json?count=3

and get this

[
{
_id: "5c6066d477b2a69a0a7810e5",
sgv: 143,
date: 1549821626000,
dateString: "2019-02-10T18:00:26.000Z",
trend: 4,
direction: "Flat",
device: "share2",
type: "sgv"
},
{
_id: "5c6065a877b2a69a0a7801ce",
sgv: 134,
date: 1549821326000,
dateString: "2019-02-10T17:55:26.000Z",
trend: 4,
direction: "Flat",
device: "share2",
type: "sgv"
},
{
_id: "5c60647b77b2a69a0a77f381",
sgv: 130,
date: 1549821026000,
dateString: "2019-02-10T17:50:26.000Z",
trend: 4,
direction: "Flat",
device: "share2",
type: "sgv"
}
]

I can change the URL from a .json to a .txt and get this

2019-02-10T18:00:26.000Z 1549821626000 143 Flat
2019-02-10T17:55:26.000Z 1549821326000 134 Flat
2019-02-10T17:50:26.000Z 1549821026000 130 Flat
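Because it's just a plain HTTP GET, anything that can make a web request can read it. Here's a hedged C# sketch of grabbing the latest entry from that .txt endpoint and pulling out the sgv and trend (MYSITE is a placeholder for your own Nightscout instance):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SugarCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var line = await client.GetStringAsync("https://MYSITE/api/v1/entries.txt?count=1");

            // Fields are: dateString, epoch date, sgv, direction (whitespace separated)
            var parts = line.Trim().Split(new[] { '\t', ' ' }, StringSplitOptions.RemoveEmptyEntries);
            Console.WriteLine($"Sugar is {parts[2]} and trend is {parts[3]}");
        }
    }
}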

The "flat" value at the end is part of an enum that can give me a generalized trend value. Diabetics need to manage our sugars at the least hour by hour and sometimes minute by minute. As such it's super important that we have "glanceable displays." That means anything at all that gives me a sense (a sixth sense, if you will) of how I'm doing.

That might be:

Alexa, what's my blood sugar?
Adding sugar numbers and trends to your Git/PATH prompt in your shell
An Arduino with an LCD
A wall-mounted dakBoard Family Calendar in a shared space that also shows my blood sugar

I got a Das Keyboard 5Q recently - I first blogged about Das Keyboard in 2006! - and noted that it's got its own local REST API. I'm working on using their Das Keyboard Q software's Applet API to light up just the top row of keys in response to my blood sugar changing. It'll use their Node packages and JavaScript and run in the context of their software.

However, since the keyboard has a localhost REST API and so does my blood sugar, I busted out this silly little shell script. Add a cron job and my keyboard can turn from orange (low), to green, yellow, red (high) as my sugar changes. That provides a nice ambient notifier of how my sugars are doing. Someone on Twitter said "who looks at their keyboard?" I mean, OK, that's just silly. If my entire keyboard turns red I will notice it. Again, ambient. I could certainly add an alert and make a klaxon go off if you'd like.

#!/bin/sh
# This script colorizes all LEDs of a 5Q keyboard
# by sending JSON signals to the Q desktop public API,
# based on Blood Sugar values from Nightscout
set -e # quit on first error.
PORT=27301

# Colorize the 5Q keyboard
PID="DK5QPID" # product ID

# Zones are LED groups. There are less than 166 zones on a 5Q.
# This should cover the whole device.
MAX_ZONE_ID=166

# Get blood sugar from Nightscout as TEXT
red=#f00
green=#0f0
yellow=#ff0
# deep orange is LOW sugar
COLOR=#f50
bgvalue=$(curl -s https://MYSITE/api/v1/entries.txt?count=1 | grep -Eo '000\s([0-9]{1,3})+\s' | cut -f 2)
if [ $bgvalue -gt 80 ]
then
    COLOR=$green
    if [ $bgvalue -gt 140 ]
    then
        COLOR=$yellow
        if [ $bgvalue -gt 200 ]
        then
            COLOR=$red
        fi
    fi
fi

echo "Sugar is $bgvalue and color is $COLOR!"

for i in `seq $MAX_ZONE_ID`
do
    #echo "Sending signal to zoneId: $i"
    # important NOTE: if fields "name" and "message" are empty then the signal is
    # only displayed on the device's LEDs, not in the signal center
    curl -s -S --output /dev/null -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{
        "name": "Nightscout",
        "id": "'$i'",
        "message": "Blood sugar is '$bgvalue'",
        "pid": "'$PID'",
        "zoneId": "'"$i"'",
        "color": "'$COLOR'",
        "effect": "SET_COLOR"
    }' "http://localhost:$PORT/api/1.0/signals"
done
echo "\nDone.\n"

This local keyboard API is meant to send a signal to a single zone or key, so it's hacky of me (and them, really) to make 100+ REST calls to color the whole keyboard. But, it's a localhost call and it's not that spendy. This will go away when I move to their new API. Here's a video of it working.

You can also hit the volume button on the keyboard on any "signaled" (lit up) key and get a popup with the actual blood sugar value (that's 'message' in the second curl command above). Again, this is a hack, but I'm going to make it a formal applet you can just install from the store. If you want to help (I'm slow) head to the code here https://github.com/shanselman/DasKeyboard-Q-NightScout

Got my keyboard keys changing color *when my blood sugar goes up!* @daskeyboard @NightscoutProj #WeAreNotWaiting #diabetes pic.twitter.com/DSBDcrO7RE

— Scott Hanselman (@shanselman) February 8, 2019

What are some other good ideas for ambient sugar alerts? An LED strip around the monitor (bias lighting)? A Philips Hue smart light?

Consider also that you could use the glanceable display idea for pulse, anxiety, blood pressure - anything in your body you could hook up to in real- or near-realtime.

Sponsor: Get the latest JetBrains Rider with Code Vision, Rename Project refactoring, and the Assembly Explorer. Improved support for C#, VB.NET, F#, TypeScript, and Angular is all included.


© 2018 Scott Hanselman. All rights reserved.
     

Teaching Kids to Code with Minecraft Mods made easy using MakeCode and Code Connection

Feb 5, 2019

Description:

Back in the day, making a Minecraft mod was...challenging. It was a series of JAR files and Java hacks and deep folder structures. It was possible, but it wasn't fun and it surely wasn't easy. I wanted to revisit things now that Minecraft is easily installed from the Windows Store.

Today, it couldn't be easier to make a Minecraft Mod, so I know what my kids and I are doing tonight!

I headed over to https://minecraft.makecode.com/setup/minecraft-windows10 and followed the instructions. I already have Minecraft installed, so I just had to install the Minecraft Code Connection app. The architecture here is very clean and clever. Basically you turn on cheats in Minecraft and use a local websockets connection between the Code Connection app and Minecraft - you're automating Minecraft from an external application!

Here I'm turning on cheats in a new Minecraft world:

Minecraft Allow Cheats

Then from the Code Connection app, I get a URL for the automation server, then go back to Minecraft, hit "t" and paste in the URL. Now the two apps are talking to each other.

Connecting Minecraft to MakeCode

I can automate with MakeCode, Scratch, or other editors. I'll do MakeCode.

Make Code is amazing

Then an editor opens. This is the same base open source Make Code editor I used when I was coding for an Adafruit Circuit Playground Express earlier this year.

Now, I'll set up a chat command in Make Code that makes it rain chickens when I type the chat command "chicken." It runs a loop and spawns 100 chickens 10 blocks above my character's head.

Chicken rain

I was really surprised how easy this was. It was maybe 10 mins end to end, which is WAY easier than the Java add-ins I learned about just a few years ago.

Minecraft Chicken Rain

There are a ton of tutorials here, including Chicken Rain. https://minecraft.makecode.com/tutorials

The one I'm most excited to show my kids is the Agent. Your connection to the remote Code Connection app includes an avatar or "agent." Just like Logo (remember that, robot turtles?) you can control your agent and make him build stuff. No more tedious house building for us! Let's for-loop our way to glory and teach dude how to make us a castle!

Sponsor: Get the latest JetBrains Rider with Code Vision, Rename Project refactoring, and the Assembly Explorer. Improved support for C#, VB.NET, F#, TypeScript, and Angular is all included.


© 2018 Scott Hanselman. All rights reserved.
     

Brainstorming - Creating a small single self-contained executable out of a .NET Core application

Feb 1, 2019

Description:

I've been using ILMerge and various hacks to merge/squish executables together for well over 12 years. The .NET community has long toyed with the idea of a single self-contained EXE that would "just work." No need to copy a folder, no need to install anything. Just a single EXE.

While work and thought continues on a CoreCLR single-file EXE solution, there's a nice Rust tool called Warp that creates self-contained single executables. Warp is cross-platform, works with any tech, and is very clever.

The Warp Packer app has a slightly complex command line, like this:

.\warp-packer --arch windows-x64 --input_dir bin/Release/netcoreapp2.1/win10-x64/publish --exec myapp.exe --output myapp.exe

Fortunately Hubert Rybak has created a very nice "dotnet-pack" global tool that wraps this all up into a single command, dotnet-pack.

NOTE: There is already a "dotnet pack" command so this dotnet-pack global tool is unfortunately named. Just be aware they are NOT the same thing.

All you have to do is this:

C:\supertestweb> dotnet tool install -g dotnet-pack
C:\supertestweb> dotnet-pack
O Running Publish...
O Running Pack...

In this example, I just took a Razor web app with "dotnet new razor" and then packed it up with this tool using Warp packer. Now I've got a 40 meg self-contained app. I don't need to install anything, it just works.

C:\supertestweb> dir
Directory: C:\supertestweb

Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/6/2019 9:14 AM bin
d----- 2/6/2019 9:14 AM obj
d----- 2/6/2019 9:13 AM Pages
d----- 2/6/2019 9:13 AM Properties
d----- 2/6/2019 9:13 AM wwwroot
-a---- 2/6/2019 9:13 AM 146 appsettings.Development.json
-a---- 2/6/2019 9:13 AM 157 appsettings.json
-a---- 2/6/2019 9:13 AM 767 Program.cs
-a---- 2/6/2019 9:13 AM 2115 Startup.cs
-a---- 2/6/2019 9:13 AM 294 supertestweb.csproj
-a---- 2/6/2019 9:15 AM 40982879 supertestweb.exe

Now here's where it gets interesting. Let's say I have a console app. Hello World, packed with Warp, ends up being about 35 megs. But if I use "dotnet-pack -l aggressive", the tool will add the Mono ILLinker (tree shaker/trimmer) and shake off all the methods that aren't needed. The resulting single executable? Just 9 megs compressed (20 uncompressed).

C:\squishedapp> dotnet-pack -l aggressive
O Running AddLinkerPackage...
O Running Publish...
O Running Pack...
O Running RemoveLinkerPackage...
C:\squishedapp> dir
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/6/2019 9:32 AM bin
d----- 2/6/2019 9:32 AM obj
-a---- 2/6/2019 9:31 AM 47 global.json
-a---- 2/6/2019 9:31 AM 193 Program.cs
-a---- 2/6/2019 9:32 AM 178 squishedapp.csproj
-a---- 2/6/2019 9:32 AM 9116643 squishedapp.exe

Here is where you come in!

NOTE: The .NET team has planned to have a "single EXE" supported packing solution built into .NET 3.0. There's a lot of ways to do this. Do you zip it all up with a header/unzipper? Well, that would hit the disk a lot and be messy. Do you "unzip" into memory? Do you merge into a single assembly? Or do you try to AoT (Ahead of Time) compile and do as much work as possible before you merge things? Is a small size more important than speed?

What do you think? How should a built-in feature like this work and what would YOU focus on?

Sponsor: Check out Seq 5 for real-time diagnostics from ASP.NET Core and Serilog, now with faster queries, support for Docker on Linux, and beautiful new dark and light themes.


© 2018 Scott Hanselman. All rights reserved.
     

Visiting The National Museum of Computing inside Bletchley Park - Can we crack Enigma with Raspberry Pis?

Jan 29, 2019

Description:

image"The National Museum of Computing is a museum in the United Kingdom dedicated to collecting and restoring historic computer systems. The museum is based in rented premises at Bletchley Park in Milton Keynes, Buckinghamshire and opened in 2007" and I was able to visit it today with my buddies Damian and David. It was absolutely brilliant.

I'd encourage you to have a listen to my 2015 podcast with Dr. Sue Black who used social media to raise awareness of the state of Bletchley Park and help return the site to solvency.

The National Museum of Computing is a must-see if you are ever in the UK. It was a short 30ish minute train ride up from London. We spent the whole afternoon there.

There is a rebuild of the Colossus, the world's first electronic computer. It had a single purpose: to help decipher the Lorenz-encrypted (Tunny) messages between Hitler and his generals during World War II. The Colossus Gallery housing the rebuild of Colossus tells that remarkable story.

A working Bombe machine

The backside of the Bombe

National Computing Museum

Cipher Machine

We saw the Turing-Welchman Bombe machine, an electro-mechanical device used to break Enigma-enciphered messages about enemy military operations during the Second World War. They offer guided tours (recommended as the volunteers have encyclopedic knowledge) and we were able to encrypt a message with the German Enigma (there's a 90 second video I made, here) and decrypt it with the Bombe, which is effectively 12 Enigmas working in parallel, backwards.

Inside the top lid of a working Enigma

A working Enigma

It's worth noting - this from their website - that the first Bombe, named Victory, started code-breaking at Bletchley Park on 14 March 1940 and by the end of the war almost 1676 female WRNS and 263 male RAF personnel were involved in the deployment of 211 Bombe machines. The museum has a working reconstructed Bombe.

 

Now, decrypting the Enigma with the Bombe (and a crib, and some analysis, before the brute force starts) pic.twitter.com/OJsoMYXYhj

— Scott Hanselman (@shanselman) January 31, 2019

I wanted to understand the computing power these systems had then, and now. Check out the website where you can learn about the OctaPi - a Raspberry Pi array of eight Pis working together to brute-force Enigma. You can make your own here!

I hope you enjoy these pics and videos and I hope you one day get to enjoy the history and technology in and around Bletchley Park.

Sponsor: Check out Seq 5 for real-time diagnostics from ASP.NET Core and Serilog, now with faster queries, support for Docker on Linux, and beautiful new dark and light themes.


© 2018 Scott Hanselman. All rights reserved.
     

NuGet's fancy older sibling FuGet gives you a whole new view of the .NET packaging ecosystem

Jan 25, 2019

Description:

FuGet diffs

I remember when we announced NuGet (almost 10 years ago). Today you can get your NuGet packages (that contain .NET libraries) from NuGet.exe, from within Visual Studio, from the .NET CLI (command line interface), and from Paket. Choice is good!

Most folks are familiar with NuGet.org but have you used FuGet?

FuGet is "pro nuget package browsing!" Creating by the amazing Frank A. Krueger - of whom I am an immense fan - FuGet offers a different view on the NuGet package library. NuGet is a repository of nearly 150,000 open source libraries and the NuGet Gallery does a decent job of letting one browse around. However, https://github.com/praeclarum/FuGetGallery is an alternative web UI with a lot more depth.

FuGet is "advanced mode" for NuGet. It's a package browser combined with an API browser that helps you explore the XML documentation and metadata of a package's assemblies to help you explore and learn. And it's a JOY.

For example, if I look at https://www.fuget.org/packages/Newtonsoft.Json I can also see who depends on the package! https://www.fuget.org/packages/Newtonsoft.Json/dependents Who has taken a public dependency on your package? I can see supported frameworks, namespaces, as well as internal types. For example, I can explore JToken within Newtonsoft.Json and its embedded docs!

You can even do API diffs across versions! Check out https://www.fuget.org/packages/Serilog/2.8.0-dev-01042/lib/netstandard2.0/diff/2.6.0/ for example. This is an API diff between 2.8.0-dev-01042 and 2.6.0 for Serilog. This could be useful for users or package maintainers when deciding how big a version bump is required, depending on how much of the API has changed. It also gives you a view (as the downstream consumer) of what's coming at you in pre-release versions!

From Frank's blog:

Have you ever wondered if the library you're using has been customized for a certain platform? Have you wondered if it will work on your platform at all?

This doubt is removed by displaying - in full technicolor - all the frameworks that the library supports.  Supported Frameworks

They're color coded so you can see at a glance:
Green libraries are .NET Standard and will work everywhere
Dark blue libraries are platform specific
Light blue libraries are for full .NET and Mono only
Yellow libraries are old PCLs that we're all trying to forget

FuGet.org is a fantastic addition to the .NET ecosystem and I'd encourage you to bookmark it, use it, support it, and get involved!

If you're interested in stuff like this (and the code that runs stuff like this) also check out Stephen Cleary's useful http://dotnetapis.com/ and its associated code on GitHub https://github.com/StephenClearyApps/DotNetApis.

Sponsor: Your code is bad, but that’s ok thanks to Sentry’s full stack error monitoring that enables you to track and fix application errors in real time. Stop garbage code from becoming garbage fires.


© 2018 Scott Hanselman. All rights reserved.
     

How to use Windows 10's built-in OpenSSH to automatically SSH into a remote Linux machine

Jan 23, 2019

Description:

In working on getting Remote debugging with VS Code on Windows to a Raspberry Pi using .NET Core on ARM in my last post, I was looking for optimizations and realized that I was using plink/putty for my SSH tunnel. Putty is one of those tools that we (as developers) often take for granted, but ideally I could do stuff like this without installing yet another tool. Being able to use out of the box tools has a lot of value.

A friend pointed out this part where I'm using plink.exe to ssh into the remote Linux machine to launch the VS Debugger:

"pipeTransport": {
"pipeCwd": "${workspaceFolder}",
"pipeProgram": "${env:ChocolateyInstall}\\bin\\PLINK.EXE",
"pipeArgs": [
"-pw",
"raspberry",
"root@crowpi.lan"
],
"debuggerPath": "/home/pi/vsdbg/vsdbg"
}

I could use the Linux/bash that's been built into Windows 10 for years now. As you may know, Windows 10 can run many Linuxes out of the box. If I have a Linux distro configured, I can call Linux commands locally from CMD or PowerShell. For example, here you see I have three Linuxes and one is the default. I can call "wsl" and any command line is passed in.

C:\Users\scott> wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu-18.04 (Default)
WLinux
Debian
C:\Users\scott> wsl ls ~/
forablog forablog.2 forablog.2.save forablog.pub myopenaps notreal notreal.pub test.txt

So theoretically I could "wsl ssh" and use that Linux's ssh, but again, requires setup and it's a little silly. Windows 10 now supports OpenSSL already!

Open an admin PowerShell to see if you have it installed. Here I have the client software installed but not the server.

C:\> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'

Name : OpenSSH.Client~~~~0.0.1.0
State : Installed

Name : OpenSSH.Server~~~~0.0.1.0
State : NotPresent

You can then add the client (or server) with this one-time command:

Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

You'll get all the standard OpenSSH stuff that one would want.

OpenSSH tools on Windows

Let's say now that I want to be able to ssh (shoosh!) into a remote Linux machine using SSH keys rather than a password. It's much more convenient and secure. I'll be ssh'ing with my Windows SSH into a remote Linux machine. You can see where ssh is installed:

C:\Users\scott>where ssh
C:\Windows\System32\OpenSSH\ssh.exe

Level set - What are we doing and what are we trying to accomplish?

I want to be able to type "ssh pi@crowpi" from my Windows machine and automatically be logged in.

I will

Make a key on my Windows machine. The FROM. I want to ssh FROM here TO the Linux machine.
Tell the Linux machine (by transferring it over) about the public piece of my key and add it to a specific user's authorized_keys.
PROFIT

Here's what I did. Note you can do this in several ways. You can gen the key on the Linux side and scp it over, you can use a custom key and give it a filename, and you can use a passphrase if you like. Just get the essence right.

Below, note that when the command line is C:\ I'm on Windows and when it's $ I'm on the remote Linux machine/Raspberry Pi.

Gen the key on Windows with ssh-keygen.
I ssh'ed over to Linux and note I'm prompted for a password, as expected.
I "ls" to see that I have a .ssh/ folder. Cool. You can see authorized_keys is in there. You may or may not have this file or folder - make the ~/.ssh folder if you don't.
Exit out. I'm in Windows now.
Look closely here. I'm "scott" on Windows so my public key is in c:\users\scott\.ssh\id_rsa.pub. Yours could be in a file you named earlier, so be conscious of that.
I'm type'ing (cat on Linux is type on Windows) that text file out and piping it into SSH, where I log in to that remote machine as the user pi, and then I cat (on the Linux side now) and append >> that text to the .ssh/authorized_keys file. The ~ folder is implied but could be added if you like.
Now when I ssh pi@crowpi I should NOT be prompted for a password.

Here's the whole thing.

C:\Users\scott\Desktop> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (C:\Users\scott/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in C:\Users\scott/.ssh/id_rsa.
Your public key has been saved in C:\Users\scott/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:x2vJHHXwosSSzLHQWziyx4II+scott@IRONHEART
The key's randomart image is:
+---[RSA 2048]----+
| . .... . |
|..+. .=+=. o |
| .. |
+----[SHA256]-----+
C:\Users\scott\Desktop> ssh pi@crowpi
pi@crowpi's password:
Linux crowpi 2018 armv7l

pi@crowpi:~ $ ls .ssh/
authorized_keys id_rsa id_rsa.pub known_hosts
pi@crowpi:~ $ exit
logout
Connection to crowpi closed.
C:\Users\scott\Desktop> type C:\Users\scott\.ssh\id_rsa.pub | ssh pi@crowpi 'cat >> .ssh/authorized_keys'
pi@crowpi's password:
C:\Users\scott\Desktop> ssh pi@crowpi
pi@crowpi: ~ $

Fab. At this point I could go BACK to my Windows' Visual Studio Code launch.json and simplify it to NOT use Plink/Putty and just use ssh and the ssh key management that's included with Windows.

"pipeTransport": {
"pipeCwd": "${workspaceFolder}",
"pipeProgram": "ssh",
"pipeArgs": [
"pi@crowpi.lan"
],
"debuggerPath": "/home/pi/vsdbg/vsdbg"
}

Cool!

NOTE: In my previous blog post some folks noted I am logging in as "root." That's an artifact of the way that .NET Core is accessing the GPIO pins. That won't be like that forever.

Thoughts? I hope this helps someone.

Sponsor: Your code is bad, but that’s ok thanks to Sentry’s full stack error monitoring that enables you to track and fix application errors in real time. Stop garbage code from becoming garbage fires.


© 2018 Scott Hanselman. All rights reserved.
     

Remote debugging with VS Code on Windows to a Raspberry Pi using .NET Core on ARM

Jan 18, 2019

Description:

I've been playing with my new "CrowPi" from Elecrow. It's a great Raspberrry Pi STEM kit that is entirely self-contained in a small case. It includes a touch screen and a TON of sensors, LCDs, matrix display, sensors, buzzers, breadboard, etc.

NOTE: I talked to the #CrowPi people and they gave me an Amazon COUPON that's ~$70 off! The coupon is 8EMCVI56 and will work until Jan 31, add it during checkout. The Advanced Kit is at https://amzn.to/2SVtXl2 #ref and includes everything, touchscreen, keyboard, mouse, power, SNES controllers, motors, etc. I will be doing a full review soon. Short review is, it's amazing.

I was checking out daily builds of the new open source .NET Core System.Device.Gpio that lets me use C# to talk to the General Purpose Input/Output pins (GPIO) on the Raspberry Pi. However, my "developer's inner loop" was somewhat manual. The developer's inner loop is that "write code, run code, change code" loop that we all do. If you find yourself typing repetitive commands that deploy or test your code but don't write new code, you'll want to try to optimize that inner loop and get it down to one keystroke (or zero in the case of automatic test).

Raspberry Pi Debugging with VS Code

In my example, I was writing my code in Visual Studio Code on my Windows machine, building the code locally, then running a "publish.bat" that would scp (secure copy) the resulting binaries over to the Raspberry Pi. Then in another command prompt that was ssh'ed into the Pi, I would chmod the resulting binary and run it. This was tedious and annoying; however, as programmers, sometimes we stop noticing it and just put up with the repetitive motion.

A good (kind of a joke, but not really) programmer rule of thumb is - if you do something twice, automate it.

I wanted to be able not only to make the deployment automatic, but also ideally I'd be able to interactively debug my C#/.NET Core code remotely. That means I'm writing C# in Visual Studio Code on my Windows machine, I hit "F5" to start a debug session and my app is compiled, published, run, and I attached to a remote debugger running on the Raspberry Pi, AND I'm dropped into a debugging session with a breakpoint set. All with one keystroke. This is common practice with local apps, but for remote apps - and ones that span two CPU architectures - it can take a smidge of setup.

Starting with instructions here: https://github.com/OmniSharp/omnisharp-vscode/wiki/Attaching-to-remote-processes and here: https://github.com/OmniSharp/omnisharp-vscode/wiki/Remote-Debugging-On-Linux-Arm and a little help from Jose Perez Rodriguez at work, here's what I came up with.

Setting up Remote Debugging from Visual Code on Windows to a Raspberry Pi running C# and .NET Core

First, I'm assuming you've got .NET Core on both your Windows machine and your Raspberry Pi. You've also installed Visual Studio Code on your Windows machine and you've installed the C# extension.

On the Raspberry Pi

I'm ssh'ing into my Pi from Windows 10. Windows 10 includes ssh out of the box now, but you can also ssh from WSL (Windows Subsystem for Linux).

Install the VS remote debugger on your Pi by running this command:

curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l ~/vsdbg

To debug you will need to run the program as root, so we'll need to be able to remote launch the program as root as well. For this, we need to first set a password for the root user on your Pi, which you can do by running:

sudo passwd root

Then we need to enable ssh connections as root, by running:

sudo nano /etc/ssh/sshd_config

and adding a line that reads:

PermitRootLogin yes

Then reboot the Pi: sudo reboot

VSDbg looks like this getting installed:

pi@crowpi:~/Desktop/rpitest$ curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l ~/vsdbg
Info: Creating install directory
Using arguments
Version : 'latest'
Location : '/home/pi/vsdbg'
SkipDownloads : 'false'
LaunchVsDbgAfter : 'false'
RemoveExistingOnUpgrade : 'false'
Info: Using vsdbg version '16.0.11220.2'
Info: Previous installation at '/home/pi/vsdbg' not found
Info: Using Runtime ID 'linux-arm'
Downloading https://vsdebugger.azureedge.net/vsdbg-16-0-11220-2/vsdbg-linux-arm.zip
Info: Successfully installed vsdbg at '/home/pi/vsdbg'

At this point I've got vsdbg installed. You can go read about the MI Debug Engine here. "The Visual Studio MI Debug Engine ("MIEngine") provides an open-source Visual Studio Debugger extension that works with MI-enabled debuggers such as gdb, lldb, and clrdbg."

On the Windows Machine

Note that there are a half dozen ways to do this. I already had a publish.bat that looked like this (after installing putty with "choco install putty" on my Windows machine), so I kept using it. I'm a big fan of pushd and popd and I'll tell you this: they aren't used or known enough.

dotnet publish -r linux-arm /p:ShowLinkerSizeComparison=true
pushd .\bin\Debug\netcoreapp2.1\linux-arm\publish
pscp -pw raspberry -v -r .\* pi@crowpi.lan:/home/pi/Desktop/rpitest
popd

On Windows, I want to add two things to my .vscode folder. I'll need a launch.json that has my "Launch target" and I'll need some tasks in my tasks.json to support that. I added the "publish" task myself. My publish task calls out to publish.bat. It could also do the stuff above if I wanted. Note that I made publish "dependsOn" build, and I removed/cleared problemMatcher. If you wanted, you could write a regEx that would detect if the publish failed.

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "command": "dotnet",
            "type": "process",
            "args": [
                "build",
                "${workspaceFolder}/rpitest.csproj"
            ],
            "problemMatcher": "$msCompile"
        },
        {
            "label": "publish",
            "type": "shell",
            "dependsOn": "build",
            "presentation": {
                "reveal": "always",
                "panel": "new"
            },
            "options": {
                "cwd": "${workspaceFolder}"
            },
            "windows": {
                "command": "${cwd}\\publish.bat"
            },
            "problemMatcher": []
        }
    ]
}

Then in my launch.json, I have this to launch the remote console. This can be a little confusing because it's mixing paths that are local to Windows with paths that are local to the Raspberry Pi. For example, pipeProgram is using the Chocolatey installation of Putty's Plink. But program and args and cwd are all remote (or local to) the Raspberry Pi.

"configurations": [
{
"name": ".NET Core Launch (remote console)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build",
"program": "/home/pi/dotnet/dotnet",
"args": ["/home/pi/Desktop/rpitest/rpitest.dll"],
"cwd": "/home/pi/Desktop/rpitest",
"stopAtEntry": false,
"console": "internalConsole",
"pipeTransport": {
"pipeCwd": "${workspaceFolder}",
"pipeProgram": "${env:ChocolateyInstall}\\bin\\PLINK.EXE",
"pipeArgs": [
"-pw",
"raspberry",
"root@crowpi.lan"
],
"debuggerPath": "/home/pi/vsdbg/vsdbg"
}
}

Note the debugger path lines up with the location above that we installed vsdbg.

Remote debugging with VS Code on Windows to a Raspberry Pi using .NET Core

It's worth pointing out that while I'm doing this for C# it's not C# specific. You could setup remote debugging with VS Code using these building blocks with any environment.

The result here is that my developer's inner loop is now just pressing F5! What improvements would YOU make?

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.
© 2018 Scott Hanselman. All rights reserved.
     

Installing the .NET Core 2.x SDK on a Raspberry Pi and Blinking an LED with System.Device.Gpio

Jan 16, 2019

Description:

The CrowPi from Elecrow is an amazing STEM Kit

I've written about running .NET Core on Raspberry Pis before, although support was initially limited. Now that Linux ARM32 is a supported distro, what else can we do?

We can certainly quickly and easily install Docker on a Raspberry Pi and be running C# and .NET Core programs in minutes. We can run .NET Core in a stack of Raspberry Pis as a Kubernetes Cluster, making our own tiny cloud and install a serverless platform in it like OpenFaas!

If you have a Raspberry Pi 3 with Raspbian on it like I do, check out https://dotnet.microsoft.com/download/dotnet-core/2.2 and note that last part of the URL. You can ask for /2.1, /2.0, etc, just in case you're reading this post in the future, like tomorrow. ;) Everything is always at https://dotnet.microsoft.com/download/archives so you can tell what's Current and what's not.

For example, if I end up here https://dotnet.microsoft.com/download/thank-you/dotnet-sdk-2.2.102-linux-arm32-binaries I can grab the exact blob URL from the "try again" link and then wget it on my Raspberry Pi. You'll want to get a few prerequisites first. Note these blob links change when new stuff comes out, so you'll want to double check to get latest.

sudo apt-get install curl libunwind8 gettext
wget https://download.visualstudio.microsoft.com/download/pr/9650e3a6-0399-4330-a363-1add761127f9/14d80726c16d0e3d36db2ee5c11928e4/dotnet-sdk-2.2.102-linux-arm.tar.gz
wget https://download.visualstudio.microsoft.com/download/pr/9d049226-1f28-4d3d-a4ff-314e56b223c5/f67ab05a3d70b2bff46ff25e2b3acd2a/aspnetcore-runtime-2.2.1-linux-arm.tar.gz

I got the Linux ARM 32-bit SDK as well as the ASP.NET Runtime so I have those packages available for any web apps I choose to make.

Then we'll extract. You can set it up as a user off of $HOME or in /opt/dotnet and then link to /usr/local/bin.

mkdir -p $HOME/dotnet && tar zxf dotnet-sdk-2.2.102-linux-arm.tar.gz -C $HOME/dotnet
export DOTNET_ROOT=$HOME/dotnet
export PATH=$PATH:$HOME/dotnet

Don't forget to untar the ASP.NET Runtime as well.

tar zxf aspnetcore-runtime-2.2.1-linux-arm.tar.gz -C $HOME/dotnet

Cool. You will want to add the PATH to your profile if you want it to survive restarts. Then run "dotnet --info" to see if it works.

pi@crowpi:~ $ dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 2.2.102

Runtime Environment:
OS Name: raspbian
OS Version: 9
OS Platform: Linux
RID: linux-arm
Base Path: /home/pi/dotnet/sdk/2.2.102/

Host (useful for support):
Version: 2.2.1

.NET Core SDKs installed:
2.2.102 [/home/pi/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.2.1 [/home/pi/dotnet/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.2.1 [/home/pi/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.2.1 [/home/pi/dotnet/shared/Microsoft.NETCore.App]

Looks good.

At this point I have BOTH the .NET Core runtime (for running stuff) as well as all the ASP.NET runtime for web apps or little microservices AND the .NET SDK which means I can actually compile code (slowly) on the Pi itself. It's up to me/you. If you aren't ever going to develop (compile code) on the Raspberry Pi, you can just install the runtime, but I think it's nice to be prepared.

I am installing all this on a wonderful Raspberry Pi kit called a "CrowPi." They had a successful Kickstarter and are now selling a Raspberry Pi Educational Kit with an attached custom board with dozens of components. Rather than having to connect motion sensors, sound sensors, touch sensors, switches, and buttons, and carry around a bunch of wires, you can experiment and play with stuff in a very organized case that also has a 7-inch HDMI touch screen. They also have 21 great Python Video Courses on their YouTube Channel on how to get started with hardware. It's a joy of a device. More on that later.

NOTE: I talked to the #CrowPi people and they gave me an Amazon COUPON that's ~$70 off! The coupon is 8EMCVI56 and will work until Jan 31, add it during checkout. The Advanced Kit is at https://amzn.to/2SVtXl2 #ref and includes everything, touchscreen, keyboard, mouse, power, SNES controllers, motors, etc. I will be doing a full review soon. Short review is, it's amazing.

Now that .NET Core is installed, I can start exploring the fun happening over at https://github.com/dotnet/iot. It's filled with lots of new functionality inside of System.Device.Gpio. Remember that GPIO means "General Purpose Input/Output" which, on a Raspberry Pi, is connected to a ribbon cable on the CrowPi with lots of cool sensors ready to go!

I could build my Raspberry Pi apps on my Windows/Mac/Linux machine and I'll find it much faster to compile. Then I can "scp" (secure copy) it over to the Pi. It's nice to point out that Windows 10 includes scp.exe now by default!

In this example, by adding -r linux-arm I'm copying a complete self-contained app over to the Pi, so I don't actually need to install .NET Core like I did above. If instead I didn't use -r (to declare a specific runtime) then I would need to make sure I've got the right versions on my dev box vs my RPi, so consider what's best for you.

Here I am in my Windows machine that also has the same version of the .NET Core SDK installed. I'm in .\rpitest with a console app I made with "dotnet new console." Now I want to build and copy it over to the Pi.

dotnet publish -r linux-arm
cd bin\Debug\netcoreapp2.1\linux-arm\publish
scp -r . pi@crowpi:/home/pi/Desktop/rpitest

From the Pi, I'll need to "sudo chmod +x" the rpitest application to make sure it is executable.

There's a brilliant video from Cam Soper that shows you in great detail how to run .NET Core 2.x on a Raspberry Pi and I recommend you check it out as well.

IoT devices expose much more than serial ports. They typically expose multiple kinds of pins that can be programmatically used to read sensors, drive LED/LCD/eInk displays and communicate with our devices. .NET Core now has APIs for GPIO, PWM, SPI, and I²C pin types.

These APIs are available via the System.Device.GPIO NuGet package. It will be supported for .NET Core 2.1 and later releases. There's some basic samples here https://github.com/dotnet/iot/blob/master/samples/README.md to start with.

From Microsoft:

Most of our effort has been spent on supporting these APIs in Raspberry Pi 3. We plan to support other devices, like the Hummingboard. Please tell us which boards are important to you. We are in the process of testing Mono on the Raspberry Pi Zero.

For now System.Device.Gpio is a prerelease, so you'll want to add a nuget.config to your project with the path to the dailies:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<clear />
<add key="myget.org" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
</packageSources>
</configuration>

Add a reference to System.Device.Gpio, which is (at the time of this writing) version 0.1.0-prerelease.19065.1. Now let's do something!
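In the csproj, that package reference looks something like this (version taken from above - it will certainly have moved on by the time you read this):

<ItemGroup>
  <PackageReference Include="System.Device.Gpio" Version="0.1.0-prerelease.19065.1" />
</ItemGroup>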

Here I'm just blinking this LED!

Console.WriteLine("Hello World!");
GpioController controller = new GpioController(PinNumberingScheme.Board);
var pin = 37;
var lightTime = 300;

controller.OpenPin(pin, PinMode.Output);
try {
while (true) {
controller.Write(pin, PinValue.High);
Thread.Sleep(lightTime);
controller.Write(pin, PinValue.Low);
Thread.Sleep(lightTime);
}
}
finally {
controller.ClosePin(pin);
}

Yay! Step zero works! Every cool IoT projects starts with a blinking LED!

Blinking LEDs ZOMG

Do be aware that System.Device.Gpio is moving VERY fast and some of this code and the samples may not work if namespaces or class names change. It'll settle down soon.

Great stuff though! Go get involved over at https://github.com/dotnet/iot as they are actively working on drivers/abstractions for Windows, Linux, etc and you could even submit a PR for a device like an LCD or simple sensor! I've only been playing for an hour but I will report back as I try new experiments with my kids.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

How to update the firmware on your Zune, without Microsoft, dammit.

Jan 11, 2019

Description:

A glorious little Zune

As I said on social media today, it's 2019 and I'm updating the firmware on a Zune, fight me. ;) There's even an article on Vice about the Zune diehards. The Zune is a deeply under-respected piece of history and its UI marked the start of Microsoft's fluent design.

Seriously, though, I got this Zune and it's going to be used by my 11 year old because I don't want him to have a phone yet. He's got a little cheap no-name brand MP3 player and he's filled it up and basically outgrown it. I could get him an iPod Touch or something but he digs retro things (GBC, GBA, etc) so my buddy gave me a Zune in the box. Hasn't been touched...but it has a super old non-metro UI firmware.

Can a Zune be updated in 2019? Surely it can. Isn't Zune dead? I hooked up a 3DO to my 4k flatscreen last week, so it's dead when I say it's dead.

IMPORTANT UPDATE: After I spent time figuring this out I found out on Twitter that there's a small but active Zune community on Reddit! Props to them for doing this in several ways as well. The simplest way to update today is to point resources.zune.net to zuneupdate.com's IP address in your hosts file. The way I did it does use the files directly from Microsoft and gives you full control, but it's overly complex for regular folks for as long as the zuneupdate.com server exists as a mirror. Use the method that works easier for you and that you trust and understand!

First, GET ZUNE: the Zune Software version 4.8 is up at the Microsoft Download Center and it installs just fine on Windows 10. I've also made a copy in my Dropbox if this ever disappears. You should too!

Second, GET FIRMWARE: the Zune Firmware is still on the Microsoft sites as well. This is an x86 MSI so don't bother trying to install it; we're going to open it up like an archive instead. Save this file forever. There's a half dozen ways to crack open an MSI. Since not everyone who will read this blog is a programmer, the easiest way is to download lessmsi and use it to open and extract the firmware MSI. It's just an MSI-specific extractor but it's nicer than 7zip because it extracts the files with the correct names. If you use Chocolatey, it's just "choco install lessmsi" then run "lessmsi-gui." LessMSI will put the files in a deep folder structure. You'll want to move them so all your files are right at the top of c:\users\YOURNAME\downloads\zunestuff. We will make some other small changes a little later on here.
LessMSI

If you really want to, you could install 7zip and extract the contents of the Zune Firmware MSI into a new folder, but I don't recommend it as you'll need to rename the files and give them the correct extensions. NERDS: you can also use msiexec from the command line, but I'm trying to keep this super simple.

Third, FAKE THE ZUNE UPDATE SERVER: Since the Zune servers are gone, you need to pretend to be the old Zune Server. The Zune Software will "phone home" to Microsoft at resources.zune.net (which is gone) to look for firmware. Since the Zune software was made in a simpler time (a decade ago) it doesn't use SSL or do any checking of the cert to confirm the identity of the Zune server. This would be sad in 2019, but it's super useful to us when bringing this old hardware back to life. Again, there's a half dozen ways to do this. Feel free to do whatever makes you happy, as an HTTP GET is an HTTP GET, isn't it?

NERDS: If you use Fiddler or any HTTP sniffer you can launch the Zune software and see it phone home for resources.zune.net/firmware/v4_5/zuneprod.xml and get a 404. If it had found this, it'd look at your Zune model and then figure out which cab (cabinet) archive to get the firmware from. We can easily spoof this HTTP GET.

NERDS^2: Why didn't I use the Fiddler AutoResponder to record and replay the HTTP GETs? I tried. However, there's a number of different files that the Zune software could request and I only have the one Zune, and I couldn't figure out how to model it in Fiddler. If I could do this, we could just install Fiddler and avoid editing the hosts file AND using a tiny web server.

From an admin command prompt, run notepad \windows\system32\drivers\etc\hosts and add this line:
127.0.0.1 resources.zune.net

This says "if you ever want stuff from resources.zune.net, I'll handle it myself." Who is "myself?" It's our computer! It'll be a little web server you (or I) will run on our own, on localhost AKA 127.0.0.1. Now download .NET Core; it's a small and fast-to-install programming environment. Don't worry, we aren't coding, we are just using the tools it includes. It won't mess up your machine or install anything at startup. Grab any 2.x .NET SDK from https://dot.net and install it from an MSI. Then go to a command prompt and run these commands. First we'll run dotnet once to warm it up, then get the server and run it from our zunestuff folder. We'll install a tiny static web server called dotnet-serve. See below:
dotnet
dotnet tool install --global dotnet-serve
cd c:\users\YOURNAME\downloads\zunestuff
dotnet serve -p 80

If you get any errors that dotnet serve can't be found, just close the command prompt and open it again to update your PATH. If you get errors that port 80 is already in use, be sure to stop IIS or Skype Desktop or anything that might be listening on port 80.

Now, remember where I said you'd extract all those cabs and files out of the Firmware MSI? When we load the Zune software and watch network traffic, we see it's asking for resources.zune.net/firmware/v4_5/zuneprod.xml. We need to answer (since Zune is gone, it's on us now). You'll want to make folders like this: C:\users\YOURNAME\downloads\zunestuff\firmware\v4_5 and then copy/rename FirmwareUpdate.xml to zuneprod.xml and have it live in that directory. It'll look like this:
A file hierarchy under zunestuff

The zuneprod.xml file has relative URLs inside like this, one for each model of the Zune, that map from USB hardware id to cab file. Open zuneprod.xml in a text editor and add http://resources.zune.net/ before each of the firmware file cabinets. For example, if you're using Notepad, your find and replace will look like this:

Replace URL=" with URL="http://resources.zune.net/

<FirmwareUpdate DeviceClass="1"
FamilyID="3"
HardwareID="USB\Vid_045e&amp;Pid_0710&amp;Rev_0300"
Manufacturer="Microsoft"
Name="Zune"
Version="03.30.00039.00-01620"
URL="DracoBaseline.cab">
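If you'd rather script that edit than do it by hand in Notepad, a couple of lines of C# in a throwaway console app will do the same find and replace. This is just a sketch; the path is the hypothetical zunestuff folder from above, so point it at wherever your files actually live:

// Requires: using System.IO;
// Prepends the host to every relative cab URL in zuneprod.xml.
// Don't run it twice or you'll end up with the prefix doubled.
var path = @"C:\users\YOURNAME\downloads\zunestuff\firmware\v4_5\zuneprod.xml";
var xml = File.ReadAllText(path);
xml = xml.Replace("URL=\"", "URL=\"http://resources.zune.net/");
File.WriteAllText(path, xml);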

UPDATE: As mentioned above, I did all this work (about an hour of traffic sniffing) and spoofed the server locally, then found out that someone made http://zuneupdate.com as an online static spoof! It also doesn't use HTTPS, and if you'd like, you can skip the local spoof and point resources.zune.net at its IP address in your \windows\system32\drivers\etc\hosts file - which at the time of this writing was 66.115.133.19. Your hosts file would look like this, in that case. If this useful resource ever goes away, use the localhost hack above.
66.115.133.19 resources.zune.net

Now run the Zune software and connect your Zune. Notice here that I know it's working because I launch the Zune app, go to Settings | Device, then Update, and I can see dotnet serve in my other window serving the zuneprod.xml in response.
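As an aside, if dotnet-serve ever disappears, any static file server pointed at the zunestuff folder will do the job, since all the Zune software needs is plain HTTP GETs. Here's a hedged sketch of a tiny ASP.NET Core 2.x app that would do the same thing - it's not what I actually used, and the setup here is just the obvious defaults:

// Program.cs for a throwaway .NET Core 2.x web app (references the Microsoft.AspNetCore.App metapackage).
// Run it from c:\users\YOURNAME\downloads\zunestuff so that folder becomes the web root.
using System.IO;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseUrls("http://localhost:80")              // the Zune software expects plain HTTP on port 80
            .UseWebRoot(Directory.GetCurrentDirectory()) // serve files out of the current folder
            .Configure(app => app.UseStaticFiles(new StaticFileOptions
            {
                ServeUnknownFileTypes = true             // .cab isn't a known MIME type by default
            }))
            .Build()
            .Run();
}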

Required Update

It's worth pointing out that the original Zune server was somewhat smart and would only return firmware if we NEEDED a firmware update. Since we are faking it, we always return the same response. You may be prompted to install new firmware if you manually ask for an update. But you only need to get on the latest (3.30 for my brown Zune 30) and then you're good...forever.

image

Enjoy!

Your iPod Sucks

Zune is the way

Guardians 2 Zune

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Relationship Hacks: Playing video games and having hobbies while avoiding resentment

Jan 7, 2019

Description:

Super Nintendo Controller from Pexels

I'm going to try to finish my Relationship Hacks book in 2019. I've been sitting on it too long. I'm going to try to use blog posts to spur myself into action.

A number of people asked me what projects, what code, what open source I did over the long holiday. ZERO. I did squat. I played video games, in fact. A bunch of them. I felt a little guilty then I got over it.

The Fun of Finishing - Exploring old games with Xbox Backwards Compatibility

I'm not a big gamer but I like a good story. I do single player with a plot. I consider a well-written video game to be up there with a good book or a great movie. I like a narrative and a beginning and end. Since it was the holidays, it did require some thought to play games.

When you're in a mixed relationship (a geek/techie and a non-techie) you need to be respectful of your partner's expectations. The idea of burning 4-6 hours playing games likely doesn't match up with your partner's idea of a good time. That's where communication comes in. We've found this simple system useful. It's non-gendered and should work for all types of relationships.

My spouse and I sat down at the beginning of our holiday vacation and asked each other "What do you hope to get out of this time?" Setting expectations up front avoids quiet resentment building later. She had a list of to-dos and projects, I wanted to veg.

Sitting around all day (staycation) is valid, as is using the time to take care of business (TCB). We set expectations up front to avoid conflict. We effectively scheduled my veg time so it was planned and accepted and it was *ok and guilt-free*.

We've all seen the trope of the gamer hyper-focused on their video game while the resentful partner looks on. My spouse and I want to avoid that - so we do. If she knows I want to immerse myself in a game, a simple heads up goes a LONG way. We sit together, she reads, I play.

It's important to not sneak these times up on your partner. "I was planning on playing all night" can butt up against "I was hoping we'd spend time together." Boom, conflict and quiet resentment can start. Instead, a modicum of planning. A simple heads-up and balance helps.

I ended up playing about 2-3 days a week, from around 8-9pm to 2am (so a REAL significant amount of time) while we hung out on the other 4-5 days. My time was after the kids were down. My wife was happy to see me get to play (and finish!) games I'd had for years.

Also, the recognition from my spouse that while she doesn't personally value my gaming time - she values that *I* value it. Avoid belittling or diminishing your partner's hobby. If you do, you'll find yourself pushing (or being pushed) away.

One day perhaps I'll get her hooked on a great game and one day I'll enjoy a Hallmark movie. Or not. ;) But for now, we enjoy knowing and respecting that we each enjoy (and sometimes share) our hobbies. End of thread.

If you enjoy my wife's thinking, check her out on my podcast The Return of Mo. My wife and I also did a full podcast with audio over our Cancer Year. Mo and Scott share their thoughts and struggles in this cancer diary they started the day after Mo was diagnosed.

Hope you find this helpful.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Using Visual Studio Code to program Circuit Python with an AdaFruit NeoTrellis M4

Dec 26, 2018

Description:

My son and I were working on an Adafruit NeoTrellis M4 Mainboard over the holidays. This amazing little device packs NeoPixels + an audio board + a USB port along with a 120 MHz Cortex M4 core and a mic amplifier, and you can program it with CircuitPython. CircuitPython is open source and on GitHub at https://github.com/adafruit/circuitpython. "CircuitPython is an education friendly open source derivative of MicroPython." It works with a bunch of boards including this NeoTrellis and it's just lovely for teaching and learning.

This item is just the mainboard! You'll almost certainly want two Silicone Elastomer 4x4 Pads and an enclosure to go along with it.

Circuit Python

As with a lot of these small boards, when you plug a NeoTrellis into your machine via USB, you'll get a new disk drive that pops up. All you have to do to "deploy" your code is copy it to that drive. Even better, why not just edit the code in place?

There's a great Python editor called Mu that works well with Circuit Python. However, my son and I are more familiar with Visual Studio Code so we wanted to see how it worked with Circuit Python.

We installed the Python extension for VS Code as well as the Arduino extension for VS Code and the Arduino IDE directly from the Windows Store.

Fire up VS Code and File | Open Folder and open the Disk Drive of the NeoTrellis and open (or create) a code.py file. Then from the Command Palette (Ctrl-Shift-P) in VS Code select Arduino > Initialize. If you get an error you may need to set up the path to your Arduino IDE. If you installed it from the Windows Store like we did you may find it in a weird path. We set the arduino.path like this:

"arduino.path": "C:\\Program Files\\WindowsApps\\ArduinoLLC.ArduinoIDE_1.8.19.0_x86__mdqgnx93n4wtt"

The NeoTrellis M4 also shows up as a COM port so you can look at its Serial Output for debugging purposes as if it were an Arduino (because it is underneath). You then Arduino > Select a COM Port from the Command Palette and it will create a file called .vscode/arduino.json in your folder that will look like this:

{
"port": "COM3"
}

Trellis M4 is awesome

Now, within Visual Studio Code select Arduino > Open Serial Monitor and all of your print("") methods will output to that bottom pane.

Of course, we could PuTTY into the COM port, but since I'm using this as a learning tool with my 11 year old, I find that a single window that shows both the console and the code helps them focus, rather than managing multiple windows.

At this point we have a nice Developer Inner Loop going. That inner loop for us (the developers) is that we can write some code, hit save (Ctrl-S) and get immediate feedback. The application restarts when it detects the code.py file has changed and any debug (print) statements appear in the console immediately.

Visual Studio Code doing some Circuit Python

We are really enjoying this Adafruit NeoTrellis M4 Express kit. Next we're going to make a beat sequencer since the Christmas Soundboard was such a hit with mom!

Sponsor: Me! Hey friends, I've got a podcast I'm very proud of where I interview amazing people every week. Check it out at https://www.hanselminutes.com and please not only subscribe in your favorite podcasting app, but also tell your friends! Tweet about it and review it on iTunes/Google Play. Thanks!


© 2018 Scott Hanselman. All rights reserved.
     

The Fun of Finishing - Exploring old games with Xbox Backwards Compatibility

Dec 21, 2018

Description:

Star Wars: KOTOR

I'm on vacation for the holidays and I'm finally getting some time to play video games. I've got an Xbox One X that is my primary machine, and I also have a Nintendo Switch that is a constant source of joy. I recently also picked up a very used original PS4 just to play Spider-man but expanded to a few other games as well.

One of the reasons I end up using my Xbox more than any of my other consoles is its support for Backwards Compatibility. Backwards Compat is so extraordinary that I did an entire episode of my podcast on the topic with one of the creators.

The general idea is that an Xbox should be able to play Xbox games. Let's take that even further - Today's Xbox should be able to play today's Xbox games AND yesterday's...all the way back to the beginning. One more step further, shall we? Today's Xbox should be able to play all Xbox games from every console generation and they'll look better than you imagined them!

The Xbox One X can take 720p games and upscale them to 4k, use higher quality textures, and some games like Final Fantasy XIII have even been fully remastered but you still use the original disc! I would challenge you to play the original Red Dead Redemption on an Xbox One X and not think it was a current generation game. I recently popped in a copy of Splinter Cell: Conviction and it automatically loaded a 5-year-old save game from the cloud and I was on my way. I played Star Wars: KOTOR - an original Xbox game - and it looks amazing.

Red Dead Redemption

A little vacation combined with a lot of backwards compatibility has me actually FINISHING games again. I've picked up a ton of games this week and finally had that joy of finishing them. Each game I started up that had a save game found me picking up 60% to 80% of the way into the game. Maybe I got stuck, perhaps I didn't have enough time. Who knows? But I finished. Most of these finishes were just 3 to 5 hours of pushing on from my current (old, original) save games.

Crysis 2 - An Xbox 360 game that now works on an Xbox One X. I was halfway through and finished it up in a few days.

Crysis 3 - Of course I had to go to the local retro game trader and pick up a copy for $5 and bang through it. Crysis is a great trilogy.

Dishonored - I found a copy in my garage while cleaning. Turns out I had a save game in the Xbox cloud since 2013. I started right from where I left off. It's so funny to see a December 2018 save game next to a 2013 save game.

Alan Wake - Kind of a Twin Peaks type story, or a Stephen King with a flashlight and a gun. Gorgeous game, and very innovative for the time.

Mirror's Edge - Deceptively simple graphics that look perfect on 4k. This isn't just upsampling, to be clear. It's magic.

Metro 2033 - Deep story and a lot of world building. Oddly I finished Metro: Last Light a few months back but never did the original.

Sunset Overdrive - It's so much better than Jet Set Radio Future. This game has a ton of personality and they recorded ALL the lines twice with a male and female voice. I spoke to the voiceover artist for the female character on Twitter and I really think her performance is extraordinary. I had so much fun with this game that now the 11 year old is starting it up. An under-respected classic.

Gears of War Ultimate - This is actually the complete Gears series. I was over halfway through all of these but never finished. Gears are those games where you play for a while and end up pausing and googling "how many chapters in gears of war." They are long games. I ended up finishing on the easiest difficulty. I want a story and I want some fun but I'm not interested in punishment.

Shadow Complex - Also surprisingly long, I apparently (per my save game) gave up with just an hour to go. I guess I didn't realize how close I was to the end?

I'm having a blast (while the spouse and kids sleep, in some cases) finishing up these games. I realize I'm not actually accomplishing anything but the psychic weight of the unfinished is being lifted in some cases. I don't play a lot of multiplayer games as I enjoy a story. I read a ton of books and watch a lot of movies, so I look for a tale when I'm playing video games. They are interactive books and movies for me with a complete story arc. I love it when the credits roll. A great single player game with a built-up universe is as satisfying as (or more so than) finishing a good book.

What are you playing this holiday season? What have you rediscovered due to Backwards Compatibility?

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Enjoy some DOS Games this Christmas with DOSBox

Dec 19, 2018

Description:

I blogged about DOSBox five years ago! Apparently I get nostalgic around this time of year when I've got some downtime. Here's what I had to say:

I was over at my parents' house for the Christmas Holiday and my mom pulled out a bunch of old discs and software from 20+ years ago. One game was "Star Trek: Judgment Rites" from 1995. I had the CD-ROM Collector's edition with all the audio from the original actors, not just the floppy version with subtitles. It's a MASSIVE 23 megabytes of content!

DOSBox has been providing joy in its reliable service for over 16 years and you should go check it out RIGHT NOW, if only to remind yourself of how good we have it now. DOSBox is an x86 and DOS emulator - not a virtual machine. It emulates classic hardware like Sound Blaster cards and older graphics standards like VGA/VESA.

If a game runs too fast, you can slow it down by pressing Ctrl-F11. You can speed up games by pressing Ctrl-F12. DOSBox’s CPU speed is displayed in its title bar. Type "intro special" for a full hotkey list.

Note that DOSBox will start up TINY if you have a 4k monitor. There are a few things you can do about it. First, ALT-ENTER will toggle DOSBox into full screen mode, although when you return to Windows your windows may find themselves resized.

For Windowed mode, I used these settings. You can't scale the window when output=surface, so experiment with settings like these:

windowresolution=1280x1024
output=ddraw

These are only the most basic initial changes you'll want to make. There's an enthusiastic community of DOSBox users that are dedicated to making it as perfect as possible. I enjoy this reddit thread debating "pixel perfect" settings. There's also a number of forks and custom builds of DOSBox out there that impose specific settings so be sure to explore and pick the one that makes you happy. It's also important to understand that aspect ratios and the size and squareness of a pixel will all change how your game looks.

I tend to agree with them that I don't want a blurry scaler. I want the dots/pixels as they are, simply made larger (2x, 3x, 4x, etc) with crisp edges at a reasonable aspect ratio. An interesting change you can make to your .conf file is the "forced" keyword after your scaler choice.

Here is scaler=normal3x (no forced)

Blurry DOSBox

and here's scaler=normal3x forced

The instructions say that forced means "the scaler will be used even if the result might not be desired." In this case, it forces the use of the scaler in text mode. Your mileage may vary, but the point is there's options and it's great fun. You may want scanlines or you may want crisp pixels.

I've found it all depends on what your memory of DOS is; what you're trying to do is change the settings to best visualize that memory. My (broken) memory is of CRISP pixels.

Crisp DOSBox

Amazing difference!
The first thing you should do is add lines like these to the bottom of your dosbox.conf. You'll want your virtual C: drive mounted every time DOSBox starts up!

[autoexec]
# Lines in this section will be run at startup.
MOUNT C: C:\Users\scott\Dropbox\DosBox

If you want to play classic games but don't want the hassle (or questionable legality) of other ways, I'd encourage you to spend some serious time at https://www.gog.com. They've packaged up a ton of classic games so they "just work."

Bard's Tale 3

Space Quest 3

Enjoy! And THANK YOU to the folks that work on DOSBox for their hard work. It shows and we appreciate it.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Useful ASP.NET Core 2.2 Features

Dec 14, 2018

Description:

Earlier this week I talked about how I upgraded my podcast site to ASP.NET Core 2.2 and added Health Check features fairly easily. There's a ton of new features and so far it's been great running on my site with no issues. Upgrading from 2.1 is straightforward.

Better integration with popular Open API (Swagger) libraries including design-time checks with code analyzers
Introduction of Endpoint Routing with up to 20% improved routing performance in MVC
Improved URL generation with the LinkGenerator class & support for route Parameter Transformers (and a post from Scott Hanselman)
New Health Checks API for application health monitoring
Up to 400% improved throughput on IIS due to in-process hosting support
Up to 15% improved MVC model validation performance
Problem Details (RFC 7807) support in MVC for detailed API error results
Preview of HTTP/2 server support in ASP.NET Core
Template updates for Bootstrap 4 and Angular 6
Java client for ASP.NET Core SignalR
Up to 60% improved HTTP Client performance on Linux and 20% on Windows

I wanted to look at just a few of these that I found particularly interesting.

You can get a very significant performance boost by moving ASP.NET Core in process with IIS.

Using in-process hosting, an ASP.NET Core app runs in the same process as its IIS worker process. This removes the performance penalty of proxying requests over the loopback adapter when using the out-of-process hosting model.

After the IIS HTTP Server processes the request, the request is pushed into the ASP.NET Core middleware pipeline. The middleware pipeline handles the request and passes it on as an HttpContext instance to the app's logic. The app's response is passed back to IIS, which pushes it back out to the client that initiated the request.

HTTP Client performance improvements are quite significant as well.

Some significant performance improvements have been made to SocketsHttpHandler by improving the connection pool locking contention. For applications making many outgoing HTTP requests, such as some Microservices architectures, throughput should be significantly improved. Our internal benchmarks show that under load HttpClient throughput has improved by 60% on Linux and 20% on Windows. At the same time the 90th percentile latency was cut down by two on Linux. See Github #32568 for the actual code change that made this improvement.

HTTP/2 is enabled by default. HTTP/2 may be sneaking up on you as for the most part "it just works." In ASP.NET Core's Kestrel web server, HTTP/2 is enabled by default over HTTPS. You can see here at both the command line and in Chrome that I'm using HTTP/2 locally.

HTTP/2 locally

Here's Chrome. Note the "h2."

HTTP/2 in Chrome

Note that you'll only be able to get HTTP/2 when ALPN (Application-Layer Protocol Negotiation) is available. That means ALPN is supported on:

.NET Core on Windows 8.1/Windows Server 2012 R2 or higher
.NET Core on Linux with OpenSSL 1.0.2 or higher (e.g., Ubuntu 16.04)
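If you want to be explicit rather than relying on the defaults, Kestrel lets you pick protocols per endpoint. This is a rough sketch against the 2.2 APIs as I understand them - the port, the dev certificate, and the Startup class are assumptions, not code from my site:

// Program.cs - explicitly opting an endpoint into HTTP/1.1 + HTTP/2 over HTTPS.
using System.Net;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureKestrel(options =>
            {
                options.Listen(IPAddress.Loopback, 5001, listenOptions =>
                {
                    listenOptions.UseHttps(); // the local ASP.NET Core dev certificate
                    listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
                });
            })
            .UseStartup<Startup>() // your normal Startup class
            .Build()
            .Run();
}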

All in all, it's a solid release. Go check out the announcement post on ASP.NET Core 2.2 for even more detail!

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

How to set up ASP.NET Core 2.2 Health Checks with BeatPulse's AspNetCore.Diagnostics.HealthChecks

Dec 12, 2018

Description:

Availability Tests

ASP.NET Core 2.2 is out and released and upgrading my podcast site was very easy. Once I had it updated I wanted to take advantage of some of the new features.

For example, I have used a number of "health check" services like elmah.io, pingdom.com, or Azure's Availability Tests. I have tests that ping my website from all over the world and alert me if the site is down or unavailable.

I've wanted to make my Health Endpoint Monitoring more formal. You likely have a service that does an occasional GET request to a page and looks at the HTML, or maybe just looks for an HTTP 200 Response. For the longest time most site availability tests have been just basic pings. Recently folks have been formalizing their health checks.

You can make these tests more robust by actually having the health check endpoint check deeper and then return something meaningful. That could be as simple as "Healthy" or "Unhealthy" or it could be a whole JSON payload that tells you what's working and what's not. It's up to you!

image

Is your database up? Maybe it's up but in read-only mode? Are your dependent services up? If one is down, can you recover? For example, I use some 3rd party back-end services that might be down. If one is down I could use cached data, but my site is less than "Healthy," and I'd like to know. Is my disk full? Is my CPU hot? You get the idea.

You also need to distinguish between a "liveness" test and a "readiness" test. Liveness failures mean the site is down, dead, and needs fixing. Readiness tests mean it's there but perhaps isn't ready to serve traffic. Waking up, or busy, for example.

If you just want your app to report its liveness, just use the most basic ASP.NET Core 2.2 health check in your Startup.cs. It'll take you minutes to set up.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks(); // Registers health check services
}

public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthcheck");
}

Now you can add a content check in your Azure or Pingdom, or tell Docker or Kubernetes if you're alive or not. Docker has a HEALTHCHECK directive for example:

# Dockerfile
...
HEALTHCHECK CMD curl --fail http://localhost:5000/healthcheck || exit 1

If you're using Kubernetes you could hook up the Healthcheck to a K8s "readinessProbe" to help it make decisions about your app at scale.

Now, since determining "health" is up to you, you can go as deep as you'd like! The BeatPulse open source project has integrated with the ASP.NET Core Health Check API and set up a repository at https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks that you should absolutely check out!

Using these add-on methods you can check the health of everything - SQL Server, PostgreSQL, Redis, ElasticSearch, any URI, and on and on. Just add the package you need and then add the extension you want.
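You can also write a check of your own by implementing IHealthCheck. Here's a minimal sketch using the built-in abstractions; the DiskSpaceHealthCheck name and the 1 GB threshold are made up for illustration, not something from the library:

// Requires: using System.IO; using System.Threading; using System.Threading.Tasks;
//           using Microsoft.Extensions.Diagnostics.HealthChecks;
public class DiskSpaceHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        var freeBytes = new DriveInfo("C").AvailableFreeSpace;

        var result = freeBytes > 1_000_000_000 // ~1 GB, an arbitrary threshold
            ? HealthCheckResult.Healthy($"{freeBytes} bytes free")
            : HealthCheckResult.Degraded($"Only {freeBytes} bytes free");

        return Task.FromResult(result);
    }
}

// Registered alongside the others in ConfigureServices:
// services.AddHealthChecks().AddCheck<DiskSpaceHealthCheck>("disk");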

You don't usually want your health checks to be heavy but as I said, you could take the results of the "HealthReport" list and dump it out as JSON. If this is too much code going on (anonymous types, all on one line, etc) then just break it up. Hat tip to Dejan.

app.UseHealthChecks("/hc",
    new HealthCheckOptions
    {
        ResponseWriter = async (context, report) =>
        {
            var result = JsonConvert.SerializeObject(
                new
                {
                    status = report.Status.ToString(),
                    errors = report.Entries.Select(e => new { key = e.Key, value = Enum.GetName(typeof(HealthStatus), e.Value.Status) })
                });
            context.Response.ContentType = MediaTypeNames.Application.Json;
            await context.Response.WriteAsync(result);
        }
    });

At this point my endpoint doesn't just say "Healthy," it looks like this nice JSON response.

{
status: "Healthy",
errors: [ ]
}

I could add a Url check for my back end API. If it's down (or in this case, unauthorized) I'll get a nice explanation. I can decide if this means my site is unhealthy or degraded. I'm also pushing the results into Application Insights which I can then query on and make charts against.

services.AddHealthChecks()
.AddApplicationInsightsPublisher()
.AddUrlGroup(new Uri("https://api.simplecast.com/v1/podcasts.json"),"Simplecast API",HealthStatus.Degraded)
.AddUrlGroup(new Uri("https://rss.simplecast.com/podcasts/4669/rss"), "Simplecast RSS", HealthStatus.Degraded);

Here is the response, cool, eh?

{
status: "Degraded",
errors: [
{
key: "Simplecast API",
value: "Degraded"
},
{
key: "Simplecast RSS",
value: "Healthy"
}
]
}

This JSON is custom, but perhaps I could use a built-in writer for a free reasonable default and then hook up a free default UI?

app.UseHealthChecks("/hc", new HealthCheckOptions()
{
    Predicate = _ => true,
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

app.UseHealthChecksUI(setup => { setup.ApiPath = "/hc"; setup.UiPath = "/healthcheckui"; });

Then I can hit /healthcheckui and it'll call the API endpoint and I get a nice little bootstrappy client-side front end for my health check. A mini dashboard if you will. I'll be using Application Insights and the API endpoint but it's nice to know this is also an option!

If I had a database I could check one or more of those for health as well. The possibilities are endless and up to you.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddSqlServer(
            connectionString: Configuration["Data:ConnectionStrings:Sql"],
            healthQuery: "SELECT 1;",
            name: "sql",
            failureStatus: HealthStatus.Degraded,
            tags: new string[] { "db", "sql", "sqlserver" });
}

It's super flexible. You can even set up ASP.NET Core Health Checks to have a webhook that sends a Slack or Teams message that lets the team know the health of the site.

Check it out. It'll take less than an hour or so to set up the basics of ASP.NET Core 2.2 Health Checks.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

How to remove words from the Windows Autocorrect Spell Check Dictionary

Dec 7, 2018

Description:

Well crap. I was typing really fast and got a squiggly, so I right-clicked on it and rather than selecting the correct word from the autocorrect dictionary, I clicked Add To Dictionary.

I added the MISSPELLED WORD to the Dictionary! Now Windows is suggesting that I spell this word (and others) wrong in all apps.

At this point I also realized that I had no idea how to REMOVE a word from the Windows Spell Check Dictionary. However, I do know that Windows isn't a black box so there must be a dictionary somewhere. It's gotta be a file or a registry key or something, right?

It's even easier than I thought it would be. The Windows 10 custom dictionaries are at %AppData%\Microsoft\Spelling\


I just opened the default.dic file in Notepad and removed the misspelled word.

Opening default.dic in Notepad

Whew. I can't tell you how many wrong words have found their way in there over the years. Hope this helps you in some small way.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.
© 2018 Scott Hanselman. All rights reserved.
     

Announcing WPF, WinForms, and WinUI are going Open Source

Dec 4, 2018

Description:

Buckle up friends! Microsoft is open sourcing WPF, Windows Forms (winforms), and WinUI, so the three major Windows UX technologies are going open source! All this is happening on the same day as .NET Core 3.0 Preview 1 is announced. Madness! ;)

.NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). Note that .NET Core 3 continues to be open source and runs on Windows, Linux, Mac, in containers, and in the cloud. In the case of WPF/WinForms/etc you'll be able to create apps for Windows that include (if you like) their own copy of .NET Core for a clean side-by-side install and even faster apps at run time. The Windows UI XAML Library (WinUI) is also being open sourced AND you can use these controls in any Windows UI framework.

That means your (or my!) WPF/WinForms/WinUI apps can all use the same controls if you like, using XAML Islands. I could take the now 10 year old BabySmash WPF app and add support for pens, improved touch, or whatever makes me happy!

WPF and Windows Forms projects are run under the .NET Foundation which also announced changes today and the community will guide foundation operations. The .NET Foundation is also changing its governance model by increasing the number of board members to 7, with just 1 appointed by Microsoft. The other board members will be voted on by the community! Anyone who has contributed to a .NET Foundation project can run, similar to how the Gnome Foundation works! Learn more about the .NET Foundation here.

On the runtime and versioning side, here's a really important point from the .NET blog that's worth emphasizing IMHO:

Know that if you have existing .NET Framework apps that there is not pressure to port them to .NET Core. We will be adding features to .NET Framework 4.8 to support new desktop scenarios. While we do recommend that new desktop apps should consider targeting .NET Core, the .NET Framework will keep the high compatibility bar and will provide support for your apps for a very long time to come.

I think of it this way. If you’ve got an existing app that you’re happy with, there is no reason to port this to .NET Core. Microsoft will support the .NET Framework for a very long time, given that it’s a part of Windows. But post .NET Framework 4.8, new features will usually only become available in .NET Core because Microsoft is drastically reducing the risk and thus rate of change for .NET Framework. So if you’re building a new app or you’re actively evolving an existing app you should really start looking at .NET Core. Porting to .NET Core certainly isn’t free, but it offers many benefits, such as better performance, XCOPY deployment for the framework itself, and a feature set that is growing fast, thanks to open source. Choose the strategy that makes sense for your project and/or business.

I don't want to hear any of this "this is dead, only use that" nonsense. We just open sourced WinForms and have already taken Pull Requests. WinForms has been updated for 4k+ displays! WPF is open source, y'all! Think about the .NET Standard and how you can run standard libraries on .NET Framework, .NET Core, and Mono - or any ".NET" that's out there. Mono is enabling running .NET Standard libraries via WebAssembly. To be clear - your browser is now .NET Standard capable! There are open source projects like https://platform.uno/ and Avalonia and Ooui taking .NET in new and interesting places. Blazor makes Web UIs in .NET with (preview/experimental) client support with Web Assembly and server support included in .NET 3.0 with Razor Components. Only good things are coming, my friends!

.NET ALL THE THINGS

.NET Core runs on Raspberry Pi and ARM processors! .NET Core supports serial ports, IoT devices, and there's even a System.Device.GPIO (General Purpose I/O) package! Go explore https://github.com/dotnet/iot to really get your head around how much cool stuff is happening in the .NET space.

I want to encourage you to go check out Matt Warren's extremely well-researched post "Open Source .NET - 4 years later" to get a real visceral sense of how far we've come as a community. You'll be amazed!

Now, go play!

Download .NET Core 3 Preview 1 on Windows, Mac and Linux. You can see details of the release in the .NET Core 3 Preview 1 release notes. Visual Studio 2019 will support building .NET Core 3 applications and the VS2019 preview can be installed side by side with existing versions of VS.

Enjoy.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

On Developer Advocacy

Nov 30, 2018

Description:

Teamwork

Naming things is hard. I've talked before about the term "evangelism" and my dislike for it. Evangelism, Advocacy, Developer Relations, PR, Marketing, and on and on. More and more I'm just trying to educate and maybe entertain a little. So I like Edutainment, myself, hat tip to KRS-One.

I'm getting on a plane tomorrow to go to the Microsoft Azure + AI Conference @DevIntersection and the Free Microsoft Connect 2018 Event (you can watch online all day!) and as I was packing I was struck with a few thoughts I wanted to share here.

What a privilege it is to speak about products that so many people have worked on and (hopefully) so many people will enjoy. Especially ones as large as Azure or Visual Studio - thousands of people work so hard! Engineers, Program Managers, Testers, Community Members...people from all over working on each release so a select few of us get on stage to share it with you! And who am I to have this privilege?

Don't think for a second that when you're giving a technical talk it's about you. You're sitting on a stack of software you had a small part in writing and standing on the shoulders of giants of generations of engineers and creators who came before you. When I do talks where I'm representing a huge group I reflect on this with gratitude.

If you work on any of the products I'm showing, know this: I may be one of the talking heads or a visible grand marshal but we work for you and we never forget it. My job at events like this is to make the product - your work - shine. I take that job very seriously, and if it looks like it's effortless, that's because of the massive amount of work we put into the presentation. Hours of practice, story arcs, literally blocking movement as if it were a play or stage show, camera work, and transitions. Deeply understanding what we're presenting and why it's awesome and why you're proud of it.

I'm writing this note for all the other advocates and visible community members.

What a joy and privilege it is to stand up and represent our co-workers and fellow engineers and to tell the stories of the things they build!

Let that privilege both put motivation in you and propel you forward to present their work - your teams' work.

I appreciate you all, both inside and out, and I will do my best to represent your team and the larger community to the best of my ability.

Sponsor: Looking for a new challenge? Hired is the leading job marketplace that connects engineers to their next challenge. Let Hired connect you to your next challenge. Sign up now.


© 2018 Scott Hanselman. All rights reserved.
     

The 2018 Christmas List of Best STEM Toys for Kids

Nov 28, 2018

Description:

Hey friends! This is my FIFTH year doing a list of Great STEM Christmas Toys for Kids! Can you believe it? In case you missed them, here's the previous years' lists! Be aware I use Amazon referral links so I get a little kickback (and you support this blog!) when you use these links. I'll be using the pocket money to...wait for it...buy STEM toys for kids! So thanks in advance!

The 2017 Christmas List of Best STEM Toys for kids The 2016 Christmas List of Best STEM Toys for your little nerds and nerdettes The 2015 Christmas List of Best STEM Toys 2014: Getting Started with Robots for kids and children in STEM this holiday season

OK, let's do it!

littleBits

I've always liked littleBits but when they first came out I thought they were expensive and didn't include enough stuff. Fast forward and littleBits have dropped in price and built a whole ecosystem of littleBits that work together. This year the most fun is the littleBits Marvel Avengers Inventor Kit. At the time of this writing, this kit is 33% off at Amazon. You can build your own Iron Man (or Ironheart!) gauntlet and load it up with littleBits that can do whatever you'd like. One particularly cool thing included is an LED Matrix that you can address directly by writing code with the iOS or Android app.

littleBits Marvel Avengers Inventor Kit

Kano - Computer Kit and Wand

Both my kids love the Kano Computer Kit, now updated for 2018. It's a complete Raspberry Pi 3 kit that includes the keyboard, mouse, case, LED lights, and everything you'd need to build a Pi. This year they've branched out to the Kano Harry Potter Coding Kit that you can use to build a wand and learn to code. The "wand" is a custom PCB with codeable LEDs, buttons, and batteries that the kids put inside a wand. The wand is Bluetooth and includes lots of tech like an accelerometer, gyroscope, magnetometer, and a vibrating rumble pack. All of this tech is controllable with laptops or smart devices and you can code it with JavaScript.

Harry Potter Kano Coding Kit and Wand

UbTech JIMU Robot - Unicornbot Kit

UbTech has a whole series of Technics-style Robot kits. There's the usual tanks and cars, but there's also some more creative and "out there" ones like this 400-piece Unicorn Robot. It includes color sensors, servo motors, a DC motor, and a light-up horn. It's also codeable/controllable via an iOS or Android app. Very cool!

I'd really like their Lynx Alexa controllable walking robot but it's way out of my price range. Still fun to check out though!

Unicornbot

Erector by Meccano Kits

We've found these Erector by Meccano Kits to be inexpensive and well-built. The 25-in-1 kit is great and includes a container and over 600 pieces. I like these metal kits because they feel like the ones I had in my childhood. Kids learn how to use motors and pulleys and explore functional motion.

Erector Set

Osmo Genius Kit for iPad

The Osmo Genius is quite clever and based on one deceptively simple idea - what if the iPad camera faced downward and could see the table in front of the child? It came with a base and a reflector that directs the front-facing camera downwards. Then the educational games are written to see what's happening on the table and provide near-instant feedback. You can start with the base kit and later optionally add kits and games.

Osmo Genius Kit for iPad

Elenco 130-in-1 Electronic Playground and Learning Center

I like classic toys and while toys with bluetooth and fancy features are cool, I want to balance it out with the classics that let you explore the physical world. These also tend to be more affordable as well.

I really like this classic electronic trainer with 130 experiments like an AM broadcast station, Electronic Organ, LED strobe light, Timer, Logic Circuits and much, much more. The 50-in-One version is just $16! Frankly all the Elenco products are fantastic.

image

Piper Computer Kit (2018 Edition)

I had this on the list last year but my kids still love it. We have the 2016 kit and it's been updated for 2018.

The Piper is a little spendy at first glance, but it's EXTREMELY complete and very thoughtfully created. Sure, you can just get a Raspberry Pi and hack on it - but the Piper is not just a Pi. It's a complete kit where your little one builds their own wooden "laptop" box (more of a luggable), and then starting with just a single button, builds up the computer. The Minecraft content isn't just vanilla Microsoft. It's custom episodic content! Custom voice overs, episodes, and challenges.

What's genius about Piper, though, is how the software world interacts with the hardware. For example, at one point you're looking for treasure on a Minecraft beach. The Piper suggests you need a treasure detector, so you learn about wiring and LEDs and wire up a treasure detector LED while it's running. Then you run your Minecraft person around while the LED blinks faster to detect treasure. It's absolute genius. Definitely a favorite in our house for the 8-12 year old set.

Piper Raspberry Pi Kit

I hope you have a great holiday season!

FYI: These Amazon links are referral links. When you use them I get a tiny percentage. It adds up to taco money for me and the kids! I appreciate you - and you appreciate me - when you use these links to buy stuff.

Sponsor: Let top companies apply to you. Create a free profile on Hired and unlock the ability to let companies apply to you, not the other way around. Create a free profile.


© 2018 Scott Hanselman. All rights reserved.
     

Upgrading the DakBoard Family Calendar with Raspberry Pi Zero W and Read Only filesystem

Nov 23, 2018

Description:

Raspberry Pi Zeros are SMALL

Earlier this week I built a Family Calendar using a used flat screen monitor and a Raspberry Pi 3 I had lying around and documented it in my post How to build a Wall Mounted Family Calendar and Dashboard with a Raspberry Pi and cheap monitor.

Eric Brown added two great comments (the comments on my blog are always better than the content!) He said:

You can save power & money by using a Pi Zero W instead. This is likely overkill, but I took the time to get the Pi Zero to mount the SD card read-only and do all the writes to a RAM disk.

Eric said "RPis are surprisingly sensitive to power glitches, and will often corrupt the SD card" and that "after mounting the SD read-only, my DakBoard has been running stably for months; before doing that, it corrupted the SD card within 6 weeks."

While I haven't had any issues with my Raspberry Pis, this seemed like a fun "version 2" of the calendar to make with the kids. Worst case scenario? Now I have LCD family calendars!

You'll recall I commented about how important the Spouse Acceptance Factor is whenever introducing new technology into the house.

It has to just work. If my Spouse doesn't like the idea or find its not reliable, the SAF (Spouse Acceptance Factor) will be low and they'll want to get rid of it. All it takes is one "why isn't this working" and I'm dead in the water.

I checked Amazon and found a number of Raspberry Pi Zero W (W is for Wireless, important!) Kits for around US$20. You can see in the picture above how SMALL a Raspberry Pi Zero W is (with LEGO Miss Marvel for scale).

Get the HDMI cables as flush and sanitary as possible

If you have the cables, power supplies, and don't need the headers and extra stuff, I've seen them as low as $10. It's very important to note that a Raspberry Pi Zero W does support HDMI but it has a MINI-HDMI female connector. You'll need a mini-HDMI to HDMI adapter or a mini-HDMI to HDMI short cable.

Here's another aside. Did you know there are a LOT of different HDMI connector orientations? Sure, you could just loop a big old 6 foot HDMI cable back there, but where's the fun in that? There are micro HDMI D1,D2,D3 that describe 90 degree and 270 degree rotations of the male. If you want to be really flush, consider a cable (for example like a C2 to A2) that is usually used in drones. This would allow you to mount the Pi Zero W flush against the back of the monitor - or even better, inside the monitor or a wooden picture frame!

Dakboard

Get the Raspberry Pi Zero W on your wireless and avoid the trouble of keyboards and mice!

Pi Zero Ws are so small that they don't have a regular USB connector. There is one for power and one that is "USB OTG." If you want to connect a mouse and keyboard directly to the Zero you'll need this USB OTG Micro to Type A Cable and/or a powered USB hub.

OR!

Save money and prep your Raspberry Pi Micro SD Card with SSH turned on by default and your Wireless Network enabled by default! Then you can set it up remotely as a DakBoard/MagicMirror Family Calendar.

Download the Image for Raspbian Stretch. You'll want the desktop version (not Lite) because this IS a visual project, not a headless one! I recommend Etcher for burning images to SD Cards. It's free.

Raspberry Pi Zero W and a 1A+ micro USB power supply

Cheap micro SD Card. They should include an adapter to plug it into your main computer to prepare.

Create an empty file called "ssh" on the prepared Micro SD Card before you put the card in the Raspberry Pi.

Make a file called wpa_supplicant.conf with Linux line feeds (LF, not the default Windows CR/LF) with content like this (and your own country code):

country=us
update_config=1
ctrl_interface=/var/run/wpa_supplicant

network={
scan_ssid=1
ssid="YourNetworkSSID"
psk="NETWORKPASSWORD!"
}

This will cause the Pi to get on the network on boot up which should allow you to SSH over to it directly, thereby avoiding any trouble with keyboards and mice and the Pi Zero W.

If you DO end up wanting to connect the keyboard and mouse, you'll want a keyboard/mouse setup that is all in one with just one USB adapter or you'll need a Powered USB Hub. This should be temporary as you get the Pi prepared.

Make the Raspberry Pi Zero W readonly - after it's been configured with DakBoard

Once I had the Pi Zero W all prepared I went around the net looking for tutorials to make it readonly. You're basically causing Linux to mount the SD Card readonly and then do all writes to a RAM Disk that will ultimately be tossed whenever you (rarely) reboot. Get it perfect before you go readonly as it's a small hassle to switch back. Or you can pull the card out and mount it on your other computer then return it. Still, not awesome.

Eric from the comments pointed me to a Raspberry Pi Jessie tutorial, but I tried it and it didn't work for me, likely because I'm on Raspbian Stretch, a newer version. There's a LOT of choices and ways to do this but the best tutorial I found was on the page for Domoticz, an open source Home Automation system which looks, as an aside, awesome and something I need to check out in the future!

For now, I followed these instructions on Setting up overlayFS on Raspberry Pi (the "overlay" being the file system you'll write to, but it's a fake - the writes are going to one folder and the two folders (one read-write and one read-only) are overlaid over each other). This allowed me to make a Raspberry Pi Raspbian Stretch system readonly on my Pi Zero W.

I followed the instructions exactly, only skipping the parts like "Modify domoticz service" that didn't apply. When I run "mount" I can see the main file system is read-only and the others are overlaid and read-write.

pi@dakboard2:~ $ mount
/dev/mmcblk0p7 on / type ext4 (ro,noatime,data=ordered)
snip!
ramdisk on /var_rw type tmpfs (rw,relatime)
ramdisk on /home_rw type tmpfs (rw,relatime)
overlay on /home type overlay (rw,relatime,lowerdir=/home_org,upperdir=/home_rw/upper,workdir=/home_rw/work)
overlay on /var type overlay (rw,relatime,lowerdir=/var_org,upperdir=/var_rw/upper,workdir=/var_rw/work)

So far so good! This will make a smaller and lower power Family Calendar that will hopefully be more reliable as well! Thanks Eric from the comments!

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.
© 2018 Scott Hanselman. All rights reserved.
     

How to build a Wall Mounted Family Calendar and Dashboard with a Raspberry Pi and cheap monitor

Nov 21, 2018

Description:

Glanceable Dashboard

I love dashboards. I love Raspberry Pis (tiny $35 computers the size of a set of playing cards). And I'm cheap frugal. I found a 24" old LCD at Goodwill (a local thrift shop) and bought it but it's been sitting unused in my garage.

Then I stumbled on DakBoard. The idea is simple - A wifi connected wall display for your photos, calendar, news, weather and to-do.

The implementation is simple genius. It's a browser that starts up full screen (kiosk mode) and just sits there and updates occasionally. DakBoard provides the private webpage and tools to make that happen. You can certainly build this yourself with any number of open source tools. I chose DakBoard because it was simple, beautiful, and I was able to get the whole thing done in less than an hour. I'm sure I'll spend many hours tweaking it though. There's also the very popular MagicMirror platform, so lots of choice and power in this space!

What are some considerations?

You may want to turn it off on a schedule to save power and the screen:

cronjob - turn it off on a schedule

sensor - turn it on when something (your alarm, Nest thermostat, motion detector attached to GPIO, etc) detects your presence

It has to act like an appliance. If you are messing with it to keep it alive, it's not an appliance, it's another computer to manage.

It has to just work. If my Spouse doesn't like the idea or finds it's not reliable, the SAF (Spouse Acceptance Factor) will be low and they'll want to get rid of it. All it takes is one "why isn't this working" and I'm dead in the water.

Finally - What do you want to show?

Someone asked me - "What would I want to put on my dashboard other than a calendar? I don't see why this is useful."

What would you put on a Glance-eable Display?

Family Calendar(s), movie times, temperature, news, my blood sugar, disk free on my NAS, TV schedule, family photos, commute traffic, album releases, homework due soon, family events, trips, flight status, music playing now, literally anything you want as a glance-able display. 

Glanceable Dashboard

Philosophy

You'll want to ask yourself, is this just an iPad on the wall? I'd propose not. In fact, I'd say this is a Wall Mounted Glanceable Display - a personal dashboard - not an interactive thing. I want the family and kids to just stop by, note important information and move on.

It's also worth pointing out that a horizontal monitor on the wall looks like, well, a monitor on the wall. But somehow when it's Portrait it's dramatic. It's not something we are (yet) used to seeing. I may try this out in a few ways, or even make a few of these displays!

How to Build a Raspberry Pi-based Family Calendar

It's pretty easy! I used the DakBoard Blog but I had most of the stuff already.

Get a $35 Raspberry Pi 3. The 3 is fast and includes Wifi so you don't need an extra adapter. I like a 2.5A power supply but some folks say you can run the Raspberry Pi off the monitor's USB power - IF that power can put out at least 1A. 500mA will likely cause instability. It depends on if you want to try to get the whole thing down to one power cable.

Cheap SD Card - 8 gigs is fine, but get whatever works for you. This doesn't need to be awesome.

A 1 foot HDMI cable. You're gonna mount the Raspberry Pi to the back of the monitor and hide it so you want the cable to be as small as possible. You might need a 90 degree or 270 degree adapter to avoid HDMI cables from sticking out, or a short cable with this built in.

And finally - a 24" ish (smaller is fine) LCD (IPS is nice) monitor with smallish bezels and HDMI inputs that go out to the side (NOT directly out the back) as you want this flush on the wall. Think about how you'll mount it. You can take the back off the monitor and use hanging wire OR use a flush VESA mount.

Install Raspbian on the Raspberry Pi. I use Noobs to bootstrap my install as it's super fast and easy. Go through the standard setup. Make sure you've set up:

Wi-Fi login
Timezone
Boot to Desktop automatically
Install Chromium via "sudo apt-get install -y rpi-chromium-mods"

Then you make sure that Chromium starts up full screen, the mouse is hidden, and we're looking at the dashboard! It's super important you don't have to touch it. It's an appliance, right?

sudo nano ~/.config/lxsession/LXDE-pi/autostart

@xset s off
@xset -dpms
@xset s noblank
@chromium-browser --noerrdialogs --incognito --kiosk http://dakboard.com/app/?p=YOUR_PRIVATE_URL
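
To hide the mouse pointer, one common approach (an assumption on my part - it's not the only way) is the little unclutter utility, which is one package and one extra autostart line:

sudo apt-get install -y unclutter

Then add this line to that same autostart file:

@unclutter -idle 0.1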

Then you can set up a cronjob if you want to turn the Pi's screen on and off on a schedule. Using rpi-hdmi.sh you can make a crontab -e that looks like this:

# Turn HDMI Off (22:00/10:00pm)
0 22 * * * /home/pi/rpi-hdmi.sh off
# Turn HDMI On (7:00/7:00am)
0 7 * * * /home/pi/rpi-hdmi.sh on
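
If you'd rather not grab that helper script, the Pi's firmware tool can toggle the attached display directly - a rough sketch, assuming stock Raspbian with vcgencmd available:

# Turn the HDMI display off, then back on
vcgencmd display_power 0
vcgencmd display_power 1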

My family uses Google Calendar (GSuite) to manage hanselman.com, but I use Outlook at work. I also have a lot of business/work crap in my calendar that the family doesn't need to see. So I have two problems here: filtering, and appointment movement between Work and Home.

My wife and kids use Google Calendar and it's their authoritative source. My work calendar is MY authoritative source, so I want to sync Outlook->Google but ONLY including Personal/Podcasts/Travel categories. I categorize in Outlook at work, and then those appointments that are appropriate for the family calendar get moved over. Then the Family Calendar dashboard includes color coordinated items for Mom, Dad, Kid1, Kid2. The kids include homework that's due as appointments.

I use the Outlook Google Calendar Sync open source project to do this calendar movement for me. It does require Outlook and is a client solution so if you have a better idea let me know.

GOTCHA: I have been using Google Calendar for YEARS. I have also been using sync tools like this for years. As such, I was noticing that sometimes DakBoard would timeout asking for my Google Calendar's ICS file. It would take minutes. So I requested it myself and it was 26 megs. It's clear that Google calendar doesn't care deeply about iCal and that's disappointing. This could easily be solved if they'd support some kind of OData like URL-based query for fromdate=, todate=. In this case, the DakBoard was getting 26 megs over and over to just show a few weeks of appointments. I literally had appointments from 2005 in the calendar. I decided that since I'd declared Outlook my authoritative source for my calendar that I'd take an archive (one time snapshot) of my iCal and then delete all my calendar items from Google Calendar and re-sync, one way, from the authoritative source, going back 1 year. I'm likely a rare case but it's worth noting in case you bump into this.

All in all, this can easily be done in a short few hours if you have a Pi and a monitor. The time will be spent making it "sanitary." Making the cables perfect, hanging it on the wall, hiding the cables, then tweaking the screen to be perfect.

Editing screens on DakBoard

DakBoard has a free option that works great, or a Premium subscription that gives you even more control. Again, it depends on your web/art ability, and your patience. This is a fun new world that I'm excited to get involved with and my family is already stoked about this new display as we enter the holiday season.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Compiling C# to WASM with Mono and Blazor then Debugging .NET Source with Remote Debugging in Chrome DevTools

Nov 16, 2018

Description:

Blazor quietly marches on. In case you haven't heard (I've blogged about Blazor before), it's based on a deceptively simple idea - what if we could run .NET Standard code in the browser? No, not Silverlight - Blazor requires no plugins and doesn't introduce new UI concepts. What if we took the AOT (Ahead of Time) compilation work pioneered by Mono and Xamarin that can compile C# to Web Assembly (WASM) and added a nice UI that embraced HTML and the DOM?

Sound bonkers to you? Are you a hater? Think this solution is dumb or not for you? To the left.

For those of you who want to be wacky and amazing, consider that you can do this at the command line:

$ cat hello.cs
class Hello {
static int Main(string[] args) {
System.Console.WriteLine("hello world!");
return 0;
}
}
$ mcs -nostdlib -noconfig -r:../../dist/lib/mscorlib.dll hello.cs -out:hello.exe
$ mono-wasm -i hello.exe -o output
$ ls output
hello.exe index.html index.js index.wasm mscorlib.dll

Then you could do this in the browser...look closely on the right side there.

You can see the Mono runtime compiled to WASM coming down. Note that Blazor IS NOT compiling your app into WASM. It's sending Mono (compiled as WASM) down to the client, then sending your .NET Standard application DLLs unchanged down to run within the context of a client-side runtime. All using Open Web tools. All Open Source.

Blazor uses Mono to run .NET in the browser

So Blazor allows you to make SPAs (Single Page Apps) much like the Angular/Vue/React apps out there today, except you're only writing C# and Razor (HTML).

Consider this basic example.

@page "/counter"

<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
int currentCount = 0;
void IncrementCount() {
currentCount++;
}
}

You hit the button, it calls some C# that increments a variable. That variable is referenced higher up and automatically updated. This is a trivial example. Check out the source for FlightFinder for a real Blazor application.

This is stupid, Scott. How do I debug this mess? I see you're using Chrome but seriously, you're compiling C# and running in the browser with Web Assembly (how prescient) but it's an undebuggable black box of a mess, right?

I say nay nay!

C:\Users\scott\Desktop\sweetsassymollassy> $Env:ASPNETCORE_ENVIRONMENT = "Development"
C:\Users\scott\Desktop\sweetsassymollassy> dotnet run --configuration Debug
Hosting environment: Development
Content root path: C:\Users\scott\Desktop\sweetsassymollassy
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001
Application started. Press Ctrl+C to shut down.

Then Win+R and run this command (after shutting down all the Chrome instances)

%programfiles(x86)%\Google\Chrome\Application\chrome.exe --remote-debugging-port=9222 http://localhost:5000

Now with your Blazor app running, hit Shift+ALT+D (or Shift+SILLYMACKEY+D) and behold.

Feel free to click and zoom in. We're at a breakpoint in some C# within a Razor page...in Chrome DevTools.

HOLY CRAP IT IS DEBUGGING C# IN CHROME

What? How?

Blazor provides a debugging proxy that implements the Chrome DevTools Protocol and augments the protocol with .NET-specific information. When the debugging keyboard shortcut is pressed, Blazor points the Chrome DevTools at the proxy. The proxy connects to the browser window you're seeking to debug (hence the need to enable remote debugging).

It's just getting started. It's limited, but it's awesome. Amazing work being done by lots of teams all coming together into a lovely new choice for the open source web.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Web Development and Advanced Techniques with Linux on Windows (WSL)

Nov 14, 2018

Description:

I've posted several times on the Windows Subsystem for Linux that allows you to run Linux on Windows 10 without a VM. Check out my YouTube on Editing code and files on Windows Subsystem for Linux on Windows 10. There's just one rule. You can mess with Windows files from Linux but you can't mess with Linux files from Windows. Otherwise, go crazy and enjoy. Here's some of my previous posts you should check out:

The year of Linux on the (Windows) Desktop - WSL Tips and Tricks
Setting up a Shiny Development Environment within Linux on Windows 10
Installing Fish Shell on Ubuntu on Windows 10
Writing and debugging Linux C++ applications from Visual Studio using the "Windows Subsystem for Linux"
Ubuntu now in the Windows Store: Updates to Linux on Windows 10 and Important Tips

WSL is pretty fantastic. Although its disk access is slower than native Linux, I find myself using it every day. If you want to set up Linux on your Windows 10 machine, just turn it on, then head over to the Windows Store and search for "Linux."

You can turn on Linux on Windows 10 by typing "Windows Features" and checking "Windows Subsystem for Linux." Then get a Linux from the Windows Store.

If you prefer to use PowerShell and do it in one line, just do this from an Admin PowerShell prompt:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then go get any one (or more!) of these from the Store:

Ubuntu
OpenSUSE
SLES
Kali Linux
Debian GNU/Linux

When you're in a Windows shell like PowerShell or CMD you might want to run Linux and/or jump comfortably between shells. You can do that in a few ways. The best and recommended way is running "wsl.exe" as that will start up your default distro. You can also just type the name of the distro. So I can type "ubuntu" and get in there directly.

You can type "bash" but that's not recommended if you've changed shells. If you've set up zsh or fish and type bash, it's gonna still try to run bash.

Here I've typed wslconfig and you can see I've got both Ubuntu and Debian installed, with Ubuntu as the default when I type "wsl."

C:\Users\scott>wslconfig /list
Windows Subsystem for Linux Distributions:
Ubuntu-18.04 (Default)
Debian
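
If I wanted Debian to be the default when I type "wsl", wslconfig can change that too (a one-line sketch):

C:\Users\scott>wslconfig /setdefault Debian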

Now that I know how to run wsl from anywhere I can even pipe stuff in and out of Linux from the outside. For example, here I am in cmd.exe but I'm calling commands in Linux, that come out, then back in, etc. You can mix and match however you'd like!

C:\dev>type hello.sh
echo Hello
C:\dev>wsl cat /mnt/c/dev/hello.sh | wsl fromdos | wsl /bin/sh
Hello

This means even when I'm in CMD or PowerShell I can use Linux commands that are convenient or familiar to me. For example, here I'm piping a Windows Update log file into the Linux sha1sum command. Note the use of - to accept standard input - even though that input is from Windows!

C:\Users\scott\Desktop>type WindowsUpdate.log | wsl sha1sum -
3b48adce8f6c9cb816e8845d824dacc0440ca1b8 -

Sweet. There are a number of nice advanced techniques if you want to make your WSL installations smarter AND automatically configured. You can make a file at /etc/wsl.conf to affect your DNS, metadata, and drive mounting.
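
For example, a minimal /etc/wsl.conf might look something like this (a sketch - every section is optional; the metadata option here enables Linux file permissions on the Windows drive mounts):

[automount]
enabled = true
root = /mnt/
options = "metadata,umask=22,fmask=11"

[network]
generateHosts = true
generateResolvConf = true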

When you are in a WSL shell, your Windows drive (your main drive) is at /mnt/c. So here is my Windows desktop as viewed from WSL:

screenfetch in WSL

I do most of my dev work in /mnt/d/github, for example. That way I can use VS Code from Windows but run Node/Ruby/Go/Whatever from WSL.

I keep my files on my Windows drive, edit them in VS Code, but run things in WSL. Again, never use Windows utilities to reach into and/or edit files on the WSL/Linux subsystem. Also, always be conscious of your CR/LF situation, and be real conscious if you're going to run git in both Windows and WSL.

Here's VS Code at the top, WSL/Ubuntu running Node at the bottom, and the local node app running in Edge on Windows on the lower right. We are sharing file systems and network port space:

Cross platform Web Dev

You can even share environment variables between WSL and Windows with a special environment variable called WSLENV. This is pretty advanced but super powerful. Read this carefully. You make an environment variable that is a list of names of other variables that you want translated between environments.

That means you can do something like this. I'm in WSL and I have an environment variable that points to a location on the filesystem. I need it to be correct in both worlds.

scott@IRONHEART:/mnt/d$ export MYLINUXPATH=/mnt/d/github/expresstest
scott@IRONHEART:/mnt/d$ export WSLENV=MYLINUXPATH/p
scott@IRONHEART:/mnt/d$ cmd.exe
D:\>echo %MYLINUXPATH%
D:\github\expresstest

Read that carefully. It's awesome and it's very configurable.

There's lots of users of WSL and many have assembled great lists of resources like Awesome-WSL by Hayden.

It's also worth pointing out that WSL is now just one console you can choose from. There's PowerShell, CMD.exe, and a half dozen Linuxes. You can even make your own custom Linux Distro for your company if you like. And there's a whole world of 3rd party Consoles that sit on top of/replace conhost.exe so you can have consoles with tabs, cool fonts, ones based on web tech, whatever! You can even choose WSL/bash as your default shell in Visual Studio Code if you'd like with Ctrl+~.

Hope this gets you started with Linux on Windows. What did I miss? Sound off in the comments.

Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Terminus and FluentTerminal are the start of a world of 3rd party OSS console replacements for Windows

Nov 9, 2018

Description:

Folks have been trying to fix/supercharge the console/command line on Windows since Day One. There's a ton of open source projects over the years that try to take over or improve on "conhost.exe" (the thing that handles consoles like Bash/PowerShell/cmd on Windows). Most of these 3rd party consoles have weird or subtle issues. For example, I like Hyper as a terminal but it doesn't support Ctrl-C at the command line. I use that hotkey often enough that this small bug means I just won't use that console at all.

Per the CommandLine blog:

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn't a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not...until now!

Looks like the Windows Console team is working on making 3rd party consoles better by creating this new PTY mechanism:

We've heard from many, many developers, who've frequently requested a PTY-like mechanism in Windows - especially those who created and/or work on ConEmu/Cmder, Console2/ConsoleZ, Hyper, VSCode, Visual Studio, WSL, Docker, and OpenSSH.

Very cool! Until it's ready I'm going to continue to try out new consoles. A lot of people will tell you to use the cmder package that includes ConEmu. There's a whole world of 3rd party consoles to explore. Even more fun are the choices of color schemes and fonts to explore.

For a while I was really excited about Hyper. Hyper is - wait for it - an electron app that uses HTML/CSS for the rendering of the console. This is a pretty heavyweight solution to the rendering that means you're looking at 200+ megs of memory for a console rather than 5 megs or so for something native. However, it is a clever way to just punt and let a browser renderer handle all the complex font management. For web-folks it's also totally extensible and skinnable.

As much as I like Hyper and its look, the inability to support hitting "Ctrl-C" at the command line is just too annoying. It appears it's a very well-understood issue that will ultimately be solved by the ConPTY work as the underlying issue is a deficiency in the node-pty library. It's also a long-running issue in the VS Code console support. You can watch the good work that's starting in this node-pty PR that will fix a lot of issues for node-based consoles.

Until this all fixes itself, I'm personally excited (and using) these two terminals for Windows that you may not have heard of.

Terminus

Terminus is open source over at https://github.com/Eugeny/terminus and works on any OS. It's immediately gorgeous, and while it's in alpha, it's very polished. Be sure to explore the settings and adjust things like Blur/Fluent, Themes, opacity, and fonts. I'm using FiraCode Retina with Ligatures for my console and it's lovely. You'll have to turn ligature support on explicitly under Settings | Appearance.

Terminus is a lovely console replacement

Terminus also has some nice plugins. I've added Altair, Clickable-Links, and Shell-Selector to my loadout. The shell selector makes it easy on Windows 10 to have PowerShell, Cmd, and Ubuntu/Bash open all at the same time in multiple tabs.

I did do a little editing of the default config file to set up Ctrl-T for new tab and Ctrl-W for close-tab for my personal taste.

FluentTerminal

FluentTerminal is a Terminal Emulator based on UWP. Its memory usage on my machine is about 1/3 of Terminus and under 100 megs. As a Windows 10 UWP app it looks and feels very native. It supports ALT-ENTER Fullscreen, and tabs for as many consoles as you'd like. You can right-click and color specific tabs which was a nice surprise and turned out to be useful for on-the-fly categorization.

image

FluentTerminal has a nice themes setup and includes a half-dozen to start, plus supports imports.

It's not yet in the Windows Store (perhaps because it's in active development) but you can easily download a release and install it with a PowerShell install.ps1 script.

I have found the default Keybindings very intuitive with the usual Ctrl-T and Ctrl-W tab managers already set up, as well as Shift-Ctrl-T for opening a new tab for a specific shell profile (cmd, powershell, wsl, etc).

Both of these are great new entries in the 3rd party terminal space and I'd encourage you to try them both out and perhaps get involved on their respective GitHubs! It's a great time to be doing console work on Windows 10!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Updating my ASP.NET Website from .NET Core 2.2 Preview 2 to .NET Core 2.2 Preview 3

Nov 7, 2018

Description:

I've recently returned from a month in South Africa and I was looking to unwind while the jetlagged kids sleep. I noticed that .NET Core 2.2 Preview 3 came out while I wasn't paying attention. My podcast site runs on .NET Core 2.2 Preview 2 so I thought it'd be interesting to update the site. That means I'd need to install the new SDK, update the project references, ensure it builds in Azure DevOps's CI/CD Pipeline, AND deploys and runs in Azure.

Let's see how it goes. I'm a little out of it but I'm writing this blog post AS I DO THE WORK so you'll see my train of thought with no editing.

Ok, what version of .NET Core does this machine have?

C:\Users\scott> dotnet --version
2.2.100-preview2-009404
C:\Users\scott> dotnet tool update --global dotnet-outdated
Tool 'dotnet-outdated' was successfully updated from version '2.0.0' to version '2.1.0'.

Looks like I'm on Preview 2 as I guessed. I'll take a moment and upgrade one Global Tool I love - dotnet-outdated - in case it's been updated since I've been out. Looks like it has a minor update. Dotnet Outdated is a great utility for checking references and you should absolutely be using it or another tool like NuKeeper or Dependabot.

I'll head over to https://www.microsoft.com/net/download/dotnet-core/2.2 and get .NET Core 2.2 Preview 3. I'm building on Windows but I may want to update my Linux (WSL) install and Docker images later.

All right, installed. Check it with dotnet --version to confirm it's correct:

C:\Users\scott> dotnet --version
2.2.100-preview3-009430

Let's try to build my podcast website. Note that it consists of two projects, the main website on ASP.NET Core, and Unit Tests with XUnit and Selenium.

D:\github\hanselminutes-core [main ≡]> dotnet build
Microsoft (R) Build Engine version 15.9.8-preview+g0a5001fc4d for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restoring packages for D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj...
Restore completed in 80.05 ms for D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj.
Restore completed in 25.4 ms for D:\github\hanselminutes-core\hanselminutes.core\hanselminutes-core.csproj.
D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj : error NU1605: Detected package downgrade: Microsoft.AspNetCore.App from 2.2.0-preview3-35497 to 2.2.0-preview2-35157. Reference the package directly from the project to select a different version. [D:\github\hanselminutes-core\hanselminutes-core.sln]

The dotnet build fails, which makes sense, because it's saying hey, you're asking for 2.2 Preview 2 but I've got Preview 3 all ready for you!

Detected package downgrade: Microsoft.AspNetCore.App from 2.2.0-preview3-35497 to 2.2.0-preview2-35157

Let's see what "dotnet outdated" says about this!

dotnet outdated says there's a few packages I need to update

Cool! I love these dependency tools and the community around them. You can see that it's noticed the Preview 2 -> Preview 3 opportunity, as well as a few other smaller minor or patch version bumps.

I can run dotnet outdated -u to automatically update the references, but I'll want to treat the "reference" of "Microsoft.AspNetCore.App" a little differently and use implicit versioning. You don't want to include a specific version - as I did - for this package.

Per the docs for .NET Core 2.1 and up:

Remove the "Version" attribute on the package reference to Microsoft.AspNetCore.App. Projects which use <Project Sdk="Microsoft.NET.Sdk.Web"> do not need to set the version. The version will be implied by the target framework and selected to best match the way ASP.NET Core 2.1 works. (See below for more information.)

Doing this also fixes the build because it picks up the latest 2.2 SDK automatically! Now I'll run my Unit Tests (with code coverage) and see how it works. Cool, all tests pass (including Selenium).

88% Code Coverage

It builds locally, will it build in Azure DevOps when I check it in to GitHub?

Azure DevOps

I added a .NET Core SDK installer step when I set up my Azure DevOps Pipeline. This is where I'm explicitly installing a Preview version of the .NET Core SDK.

While I was in here I noticed the Azure DevOps pipeline was using NuGet 4.4.1. I ran "nuget update -self" on my local machine and got 4.7.1, so I updated that version as well to make the CI/CD pipeline reflect my own machine.

Now I'll git add, git commit (using verified/signed GitHub commits with my PGP Key and Yubikey):

D:\github\hanselminutes-core [main ≡ +0 ~2 -0 !]> git add .
D:\github\hanselminutes-core [main ≡ +0 ~2 -0 ~]> git commit -m "bump to 2.2 Preview 3"
[main 7a84bc7] bump to 2.2 Preview 3
2 files changed, 16 insertions(+), 13 deletions(-)

Add in a Git Push...and I can see the build start in Azure DevOps:

CI/CD pipeline build starting

Cool. While that's building, I'll make sure my existing Azure App Service (website) installation is ready to receive the deployment (assuming the build succeeds). Since I'm using an ASP.NET Core Preview build I'll want to make sure I have the Preview Site Extension installed, per the docs.

If I visit the Site Extensions menu item in the Azure Portal I can see I've got .NET Core 2.2 Preview 2, but there's an update available, as expected.

Update Available

I'll click this extension and then click Update. This extension's job is to make sure the App Service gets Preview versions of the .NET Core SDK. Only released (GA - general availability) SDKs are installed by default.

OK, .NET Core 2.2 is all updated in Azure, so I'll confirm that it's deployed as well in Azure DevOps. Yes, I'm deploying into Production without a net. Seriously, though, if there is an issue I'll just rollback. If I was deeply serious about downtime I'd be doing all this in Staging.

image

Successful local test, successful CI/CD build and test, successful deployment, and the site is back up now running on ASP.NET Core 2.2 Preview 3. It took about 45 min to do the work while simultaneously taking these screenshots and writing this blog post during the slow parts.

Good night everyone!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

.NET Core and .NET Standard for IoT - The potential of the Meadow Kickstarter

Nov 1, 2018

Description:

I saw this Kickstarter today - Meadow: Full-stack .NET Standard IoT platform. It says that "It combines the best of all worlds; it has the power of RaspberryPi, the computing factor of an Arduino, and the manageability of a mobile app. And best part? It runs full .NET Standard on real IoT hardware."

NOTE: I don't have any relationship with the company/people behind this Kickstarter, but it seems interesting so I'm sharing it with you. As with all Kickstarters, it's not a pre-order, it's an investment that may not pan out, so always be prepared to lose your investment. I lost mine with the .NET "Agent" SmartWatch even though all signs pointed to success.

Meadow IoT Kickstarter

I've done IoT work on Raspberry Pis, which is much easier lately with the emerging community support for ARM32, Raspberry Pis, and cool stuff happening on Windows 10 IoT. I've written on how easy it is to get running on Raspberry Pi. I was even able to get my own podcast website running on Raspberry Pi and in Docker.

This Meadow Kickstarter says it's running on the Mono Runtime and will support the .NET Standard 2.0 API. That means that you likely already know how to program to it. Most libraries on NuGet are .NET Standard compliant so a ton of open source software should "Just Work" on any solution that supports .NET Standard.

One thing that seems interesting about Meadow is this sentence: "The power of Raspberry Pi in the computing factor of an Arduino, and the manageability of a mobile app."

Raspberry Pis are full-on computers. Arduinos are small little (mostly) single-tasked devices. Microcomputer vs Microcontroller. It's overkill to have Ubuntu on a computer just to turn on a device. You usually want IoT devices to have as small a surface area as possible.

Meadow says "Meadow has been designed to run on a variety of microcontrollers, and our first board is based on STMicroelectronics' flagship STM32F7 MCU. The Meadow F7 Micro board is an embeddable module that's based on Adafruit Feather form factor." Remember, we are talking megs not gigs here. "We've paired the STM32F7 with 32MB of flash storage and 16MB of RAM, so you can run pretty much anything you can think of building." This is just a 216MHz ARM board.

Be sure to scroll all the way down to the bottom of the page as they outline risks as well as what's left to be done.

What do you think? While you are at it, check out our sponsor this week (total coincidence): Intel IoT! They have some great developer kits.

Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*
© 2018 Scott Hanselman. All rights reserved.
     

Side by Side user scoped .NET Core installations on Linux with dotnet-install.sh

Oct 30, 2018

Description:

I can run .NET Core on Windows, Mac, or a dozen Linuxes. On my Ubuntu installation I can check what version I have installed and where it is like this:

$ dotnet --version
2.1.403
$ which dotnet
/usr/bin/dotnet

If we interrogate that dotnet file we see it's a link to elsewhere:

$ ls -alogF /usr/bin/dotnet
lrwxrwxrwx 1 22 Sep 19 03:10 /usr/bin/dotnet -> ../share/dotnet/dotnet*
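
If you'd rather resolve the symlink chain in one step, readlink can do that (a small sketch):

$ readlink -f /usr/bin/dotnet
/usr/share/dotnet/dotnet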

If we head over there we see similar stuff as we do on Windows.

Side by side DotNet installs

Basically c:\program files\dotnet is the same as /usr/share/dotnet.

$ cd ../share/dotnet
$ ll
total 136
drwxr-xr-x 1 root root   4096 Oct  5 19:47 ./
drwxr-xr-x 1 root root   4096 Aug  1 17:44 ../
drwxr-xr-x 1 root root   4096 Feb 13  2018 additionalDeps/
-rwxr-xr-x 1 root root 105704 Sep 19 03:10 dotnet*
drwxr-xr-x 1 root root   4096 Feb 13  2018 host/
-rw-r--r-- 1 root root   1083 Sep 19 03:10 LICENSE.txt
drwxr-xr-x 1 root root   4096 Oct  5 19:48 sdk/
drwxr-xr-x 1 root root   4096 Aug  1 18:07 shared/
drwxr-xr-x 1 root root   4096 Feb 13  2018 store/
-rw-r--r-- 1 root root  27700 Sep 19 03:10 ThirdPartyNotices.txt
$ ls sdk
2.1.4  2.1.403  NuGetFallbackFolder
$ ls shared
Microsoft.AspNetCore.All  Microsoft.AspNetCore.App  Microsoft.NETCore.App
$ ls shared/Microsoft.NETCore.App/
2.0.5  2.1.5

Looking in directories works to figure out what SDKs and Runtime versions are installed, but the best way is to use the dotnet cli itself like this. 

$ dotnet --list-sdks
2.1.4 [/usr/share/dotnet/sdk]
2.1.403 [/usr/share/dotnet/sdk]
$ dotnet --list-runtimes
Microsoft.AspNetCore.All 2.1.5 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.5 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.0.5 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

There's great instructions on how to set up .NET Core on your Linux machines via Package Manager here.

Note that these installs of the .NET Core SDK are installed in /usr/share. I can use the dotnet-install.sh to do non-admin installs in my own user directory.

In order to gain more control and do things more manually, you can use this shell script here: https://dot.net/v1/dotnet-install.sh and its documentation is here at docs. For Windows there is also a PowerShell version https://dot.net/v1/dotnet-install.ps1

The main usefulness of these scripts is in automation scenarios and non-admin installations. There are two scripts: One is a PowerShell script that works on Windows. The other script is a bash script that works on Linux/macOS. Both scripts have the same behavior. The bash script also reads PowerShell switches, so you can use PowerShell switches with the script on Linux/macOS systems.

For example, I can see all the current .NET Core 2.1 versions at https://www.microsoft.com/net/download/dotnet-core/2.1 and 2.2 at https://www.microsoft.com/net/download/dotnet-core/2.2 - the URL format is regular. I can see from that page that at the time of this blog post, v2.1.5 is both Current (most recent stable) and also LTS (Long Term Support).

I'll grab the install script and chmod +x it. Running it with no options will get me the latest LTS release.

$ wget https://dot.net/v1/dotnet-install.sh
--2018-10-31 15:41:08--  https://dot.net/v1/dotnet-install.sh
Resolving dot.net (dot.net)... 104.214.64.238
Connecting to dot.net (dot.net)|104.214.64.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 30602 (30K) [application/x-sh]
Saving to: ‘dotnet-install.sh’

I like the "-DryRun" option because it will tell you what WILL happen without doing it.

$ ./dotnet-install.sh -DryRun
dotnet-install: Payload URL: https://dotnetcli.azureedge.net/dotnet/Sdk/2.1.403/dotnet-sdk-2.1.403-linux-x64.tar.gz
dotnet-install: Legacy payload URL: https://dotnetcli.azureedge.net/dotnet/Sdk/2.1.403/dotnet-dev-ubuntu.16.04-x64.2.1.403.tar.gz
dotnet-install: Repeatable invocation: ./dotnet-install.sh --version 2.1.403 --channel LTS --install-dir <auto>

If I use the dotnet-install script I can have multiple copies of the .NET Core SDK installed in my user folder at ~/.dotnet. It all depends on your PATH. Note below how I use ~/.dotnet as my .NET Core install location and run dotnet --list-sdks. Make sure you know what your PATH is and that you're getting the .NET Core you expect for your user.

$ which dotnet
/usr/bin/dotnet
$ export PATH=/home/scott/.dotnet:$PATH
$ which dotnet
/home/scott/.dotnet/dotnet
$ dotnet --list-sdks
2.1.402 [/home/scott/.dotnet/sdk]

Now I will add a few more .NET Core SDKs side-by-side with the dotnet-install.sh script. Remember again, these aren't .NETs installed with apt-get, which would be system-level and run with sudo. These are user-profile installed versions.

There's really no reason to do side by side at THIS level of granularity, but it makes the point.
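Each extra side-by-side copy is just another run of the script with an explicit version (a sketch - pass whichever versions you actually want):

$ ./dotnet-install.sh --version 2.1.302
$ ./dotnet-install.sh --version 2.1.400
$ ./dotnet-install.sh --version 2.1.401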

$ dotnet --list-sdks
2.1.302 [/home/scott/.dotnet/sdk]
2.1.400 [/home/scott/.dotnet/sdk]
2.1.401 [/home/scott/.dotnet/sdk]
2.1.402 [/home/scott/.dotnet/sdk]
2.1.403 [/home/scott/.dotnet/sdk]

When you're doing your development, you can use "dotnet new globaljson" and have each path/project request a specific SDK version.

$ dotnet new globaljson
The template "global.json file" was created successfully.
$ cat global.json
{
  "sdk": {
    "version": "2.1.403"
  }
}

Hope this helps!

Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*


© 2018 Scott Hanselman. All rights reserved.
     

ASP.NET Core 2.2 Parameter Transformers for clean URL generation and slugs in Razor Pages or MVC

Oct 25, 2018

Description:

I noticed that last week .NET Core 2.2 Preview 3 was released:

.NET Core 2.2 Preview 3
ASP.NET Core 2.2 Preview 3
Entity Framework 2.2 Preview 3

You can download and get started with .NET Core 2.2 on Windows, macOS, and Linux:
.NET Core 2.2 Preview 3 SDK (includes the runtime)
.NET Core 2.2 Preview 3 Runtime

Docker images are available at microsoft/dotnet for .NET Core and ASP.NET Core.

.NET Core 2.2 Preview 3 can be used with Visual Studio 15.9 Preview 3 (or later), Visual Studio for Mac and Visual Studio Code.

The feature I am most stoked about in ASP.NET 2.2 is a subtle one but I remember implementing it manually many times over the last 10 years. I'm happy to see it nicely integrated into ASP.NET Core's MVC and Razor Pages patterns.

ASP.NET Core 2.2 introduces the concept of Parameter Transformers to routing. Remember there isn't a direct relationship between what's in the URL/Address bar and what's on disk. The routing subsystem handles URLs coming in from the client and routes them to Controllers, but it also generates URLs (strings) when given a Controller and Action.

So if I'm using Razor Pages and I have a file Pages/FancyPants.cshtml I can get to it by default at /FancyPants. I can also "ask" for the URL when I'm creating anchors/links in my Razor Page:

<a class="nav-link text-dark" asp-area="" asp-page="/fancypants">Fancy Pants</a>

Here I'm asking for the page. That asp-page attribute points to a logical page, not a physical file.

 

We can make an IOutboundParameterTransformer that changes URLs to a format (for example) like a WordPress standard slug in the two-words format.

public class SlugifyParameterTransformer : IOutboundParameterTransformer
{
    public string TransformOutbound(object value)
    {
        if (value == null) { return null; }

        // Slugify value
        return Regex.Replace(value.ToString(), "([a-z])([A-Z])", "$1-$2").ToLower();
    }
}

Then you let the ASP.NET Pipeline know about this transformer, either in Razor Pages...

services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
    .AddRazorPagesOptions(options =>
    {
        options.Conventions.Add(
            new PageRouteTransformerConvention(
                new SlugifyParameterTransformer()));
    });

or in ASP.NET MVC:

services.AddMvc(options =>
{
    options.Conventions.Add(new RouteTokenTransformerConvention(
        new SlugifyParameterTransformer()));
});

Now when I run my application, I get my routing both coming in (from the client web browser) and going out (generated via Razor Pages). Here I'm hovering over the "Fancy Pants" link at the top of the page. Notice that it's generated /fancy-pants as the URL.

image

So that same code from above that generates anchor tags <a href= gives me the expected new style of URL, and I only need to change it in one location.

There is also a new service called LinkGenerator that's a singleton you can call outside the context of an HTTP call (without an HttpContext) in order to generate a URL string.

return _linkGenerator.GetPathByAction(
    httpContext,
    controller: "Home",
    action: "Index",
    values: new { id=42 });

This can be useful if you are generating URLs outside of Razor or in some Middleware. There's a lot more little subtle improvements in ASP.NET Core 2.2, but this was the one that I will find the most useful in the near term.

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Dependabot for .NET Core dependency tracking in GitHub

Oct 22, 2018

Description:

Bump Microsoft.ApplicationInsights.AspNetCore from 2.5.0-beta1 to 2.5.0-beta2

I've been exploring automated dependency tracking lately. I usually use my podcast's ASP.NET Core website that I host on GitHub as a guinea pig. I tried NuKeeper and the dotnet outdated global tool - both of which are fantastic and worth exploring.

This week I'm trying Dependabot. I have no relationship with this company. Public repos and personal account repos are free, their pricing is very clear, and organization accounts start at just $15 with a free trial.

I'm really impressed with how clever Dependabot is. It's almost like a person in its behavior. Yes, I realize that's kind of the point, but it's no less surprising to see. A well-written bot is a joy to behold.

For example, here is a PR (Pull Request) where Dependabot says "Bumps Microsoft.ApplicationInsights.AspNetCore from 2.5.0-beta1 to 2.5.0-beta2."

Basic stuff, right? But that's not all.

It not only does the basics where it noticed that a version bump occurred in a NuGet package, but it also copied the release notes from that NuGet package's release on GitHub! It included links to what was fixed between versions, links to the change logs, AND a complete linked commit list. I mean, that's just lovely.

A few days later, Dependabot went and closed the PR because the dependency had updated (I was slow), then it commented telling me this PR was superseded by another.

Superseded by #20

Dependabot, like any good bot, also includes commands you can send to it via "Chats" in GitHub PR comments. You can tell it to use specific labels or control milestones. You can also control behavior in the Dependabot Dashboard and have it automerge things like minor versions, or just lock things down to security-only updates.

All in all, it's a very smart bot that supports basically all the languages. .NET support is in Beta, but I haven't had any issues with it. You should definitely check it out. And let me tell you, once you've got everything automated you'll wonder how you ever managed before.

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Customer Notes: Diagnosing issues under load of Web API app migrated to ASP.NET Core on Linux

Oct 16, 2018

Description:

When the engineers on the ASP.NET/.NET Core team talk to real customers about actual production problems they have, interesting stuff comes up. I've tried to capture a real customer interaction here without giving away their name or details.

The team recently had the opportunity to help a large customer of .NET investigate performance issues they’ve been having with a newly-ported ASP.NET Core 2.1 app when under load. The customer's developers are experienced with ASP.NET on Windows but in this case they needed help getting started with performance investigations with ASP.NET Core in Linux containers.

As with many performance investigations, there were a variety of issues contributing to the slowdowns, but the largest contributors were time spent garbage collecting (due to unnecessary large object allocations) and blocking calls that could be made asynchronous.

After resolving the technical and architectural issues detailed below, the customer's Web API went from only being able to handle several hundred concurrent users during load testing to being able to easily handle 3,000, and they are now running the new ASP.NET Core version of their backend web API in production.

Problem Statement

The customer recently migrated their .NET Framework 4.x ASP.NET-based backend Web API to ASP.NET Core 2.1. The migration was broad in scope and included a variety of tech changes.

Their previous version Web API (We'll call it version 1) ran as an ASP.NET application (targeting .NET Framework 4.7.1) under IIS on Windows Server and used SQL Server databases (via Entity Framework) to persist data. The new (2.0) version of the application runs as an ASP.NET Core 2.1 app in Linux Docker containers with PostgreSQL backend databases (via Entity Framework Core). They used Nginx to load balance between multiple containers on a server and HAProxy load balancers between their two main servers. The Docker containers are managed manually or via Ansible integration for CI/CD (using Bamboo).

Although the new Web API worked well functionally, load tests began failing with only a few hundred concurrent users. Based on current user load and projected growth, they wanted the web API to support at least 2,000 concurrent users. Load testing was done using Visual Studio Team Services load tests running a combination of web tests mimicking users logging in, doing the stuff of their business, activating tasks in their application, as well as pings that the Mobile App's client makes regularly to check for backend connectivity. This customer also uses New Relic for application telemetry and, until recently, New Relic agents did not work with .NET Core 2.1. Because of this, there was unfortunately no app diagnostic information to help pinpoint sources of slowdowns.

Lessons Learned

Cross-Platform Investigations

One of the most interesting takeaways for me was not the specific performance issues encountered but, instead, the challenges this customer had working in a Linux environment. The team's developers are experienced with ASP.NET on Windows and comfortable debugging in Visual Studio. Despite this, the move to Linux containers has been challenging for them.

Because the engineers were unfamiliar with Linux, they hired a consultant to help deploy their Docker containers on Linux servers. This model worked to get the site deployed and running, but became a problem when the main backend began exhibiting performance issues. The performance problems only manifested themselves under a fairly heavy load, such that they could not be reproduced on a dev machine. Up until this investigation, the developers had never debugged on Linux or inside of a Docker container except when launching in a local container from Visual Studio with F5. They had no idea how to even begin diagnosing issues that only reproduced in their staging or production environments. Similarly, their dev-ops consultant was knowledgeable about Linux infrastructure but not familiar with application debugging or profiling tools like Visual Studio.

The ASP.NET team has some documentation on using PerfCollect and PerfView to gather cross-platform diagnostics, but the customer's devs did not manage to find these docs until they were pointed out. Once an ASP.NET Core team engineer spent a morning showing them how to use PerfCollect, LLDB, and other cross-platform debugging and performance profiling tools, they were able to make some serious headway debugging on their own. We want to make sure everyone can debug .NET Core on Linux with LLDB/SOS or remotely with Visual Studio as easily as possible.

The ASP.NET Core team now believes they need more documentation on how to diagnose issues in non-Windows environments (including Docker) and the documentation that already exists needs to be more discoverable. Important topics to make discoverable include PerfCollect, PerfView, debugging on Linux using LLDB and SOS, and possibly remote debugging with Visual Studio over SSH.

Issues in Web API Code

Once we gathered diagnostics, most of the perf issues ended up being common problems in the customer’s code.

The largest contributor to the app’s slowdown was frequent Generation 2 (Gen 2) GCs (Garbage Collections), which were happening because a commonly-used code path was downloading a lot of images (product images), converting those bytes into base64 strings, responding to the client with those strings, and then discarding the byte[] and string. The images were fairly large (>100 KB), so every time one was downloaded, a large byte[] and string had to be allocated. Because many of the images were shared between multiple clients, we solved the issue by caching the base64 strings for a short period of time (using IMemoryCache).

HttpClient Pooling with HttpClientFactory

When calling out to Web APIs there was a pattern of creating new HttpClient instances rather than using IHttpClientFactory to pool the clients. Despite implementing IDisposable, it is not a best practice to dispose HttpClient instances as soon as they’re out of scope, as they will leave their socket connection in a TIME_WAIT state for some time after being disposed. Instead, HttpClient instances should be re-used.

Additional investigation showed that much of the application’s time was spent querying PostgreSQL for data (as is common). There were several underlying issues here:

Database queries were being made in a blocking way instead of being asynchronous. We helped address the most common call-sites and pointed the customer at the AsyncUsageAnalyzer to identify other async cleanup that could help.

Database connection pooling was not enabled. It is enabled by default for SQL Server, but not for PostgreSQL. We re-enabled database connection pooling. It was necessary to have different pooling settings for the common database (used by all requests) and the individual shard databases which are used less frequently. While the common database needs a large pool, the shard connection pools need to be small to avoid having too many open, idle connections.

The Web API had a fairly ‘chatty’ interface with the database and made a lot of small queries. We re-worked this interface to make fewer calls (by querying more data at once or by caching for short periods of time).

There was also some impact from having other background worker containers on the web API’s servers consuming large amounts of CPU. This led to a ‘noisy neighbor’ problem where the web API containers didn’t have enough CPU time for their work. We showed the customer how to address this with Docker resource constraints.

Wrap Up

As shown in the graph below, at the end of our performance tuning, their backend was easily able to handle 3,000 concurrent users and they are now using their ASP.NET Core solution in production. The performance issues they saw overlapped a lot with those we’ve seen from other customers (especially the need for caching and for async calls), but proved to be extra challenging for the developers to diagnose due to the lack of familiarity with Linux and Docker environments.

Performance and Errors Charts look good, up and to the right
Throughput and Tests Charts look good, up and to the right

Some key areas of focus uncovered by this investigation were:

Being mindful of memory allocations to minimize GC pause times

Keeping long-running calls non-blocking/asynchronous

Minimizing calls to external resources (such as other web services or the database) with caching and grouping of requests
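
If you want a concrete feel for the two code-level fixes above, here's a rough C# sketch (illustrative names only - not the customer's actual code) showing HttpClient pooling via IHttpClientFactory and short-lived caching of an expensive base64 string via IMemoryCache:

// In Startup.ConfigureServices - both services ship with ASP.NET Core 2.1+
services.AddHttpClient("images"); // pooled, factory-managed handlers instead of new HttpClient() everywhere
services.AddMemoryCache();

// An illustrative service with IHttpClientFactory and IMemoryCache injected
public class ImageService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly IMemoryCache _cache;

    public ImageService(IHttpClientFactory clientFactory, IMemoryCache cache)
    {
        _clientFactory = clientFactory;
        _cache = cache;
    }

    public Task<string> GetImageBase64Async(string url)
    {
        // Cache the converted string briefly so repeat requests skip the download and the large allocations
        return _cache.GetOrCreateAsync(url, async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            var client = _clientFactory.CreateClient("images");
            var bytes = await client.GetByteArrayAsync(url);
            return Convert.ToBase64String(bytes);
        });
    }
}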

Hope you find this useful! Big thanks to Mike Rousos from the ASP.NET Core team for his work and analysis!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

New prescriptive guidance for Open Source .NET Library Authors

Oct 16, 2018

Description:

Open-source library guidance

There's a great new bunch of guidance just published representing Best Practices for creating .NET Libraries. Best of all, it was shepherded by JSON.NET's James Newton-King. Who better to help explain the best way to build and publish a .NET library than the author of the world's most popular open source .NET library?

Perhaps you've got an open source (OSS) .NET Library on your GitHub, GitLab, or Bitbucket. Go check out the open-source library guidance.

These are the identified aspects of high-quality open-source .NET libraries:

Inclusive - Good .NET libraries strive to support many platforms and applications.
Stable - Good .NET libraries coexist in the .NET ecosystem, running in applications built with many libraries.
Designed to evolve - .NET libraries should improve and evolve over time, while supporting existing users.
Debuggable - .NET libraries should use the latest tools to create a great debugging experience for users.
Trusted - .NET libraries have developers' trust by publishing to NuGet using security best practices.

The guidance is deep but also preliminary. As with all Microsoft Documentation these days it's open source in Markdown and on GitHub. If you've got suggestions or thoughts, share them! Be sure to sound off in the Feedback Section at the bottom of the guidance. James and the Team will be actively incorporating your thoughts.

Cross-platform targeting

Since the whole point of .NET Core and the .NET Standard is reuse, this section covers how and why to make reusable code but also how to access platform-specific APIs when needed with multi-targeting.

Strong naming

Strong naming seemed like a good idea, but you should know WHY and WHEN to strong name. It all depends on your use case! Are you publishing internally or publicly? What are your dependencies and who depends on you?

NuGet

When publishing on the NuGet public repository (or your own private/internal one) what do you need to know about SemVer 2.0.0? What about pre-release packages? Should you embed PDBs for easier debugging? Consider things like Dependencies, SourceLink, how and where to Publish and how Versioning applies to you and when (or if) you cause Breaking changes.

Also be sure to check out Immo's video on "Building Great Libraries with .NET Standard" on YouTube!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

C# and .NET Core scripting with the "dotnet-script" global tool

Oct 12, 2018

Description:

dotnet script

You likely know that open source .NET Core is cross platform and it's super easy to do "Hello World" and start writing some code.

You just install .NET Core, then "dotnet new console" will generate a project file and a basic app, then "dotnet run" will compile and run your app. The 'new' command creates all the supporting code, obj, and bin folders, etc. When you do "dotnet run" it actually is a combination of "dotnet build" and "dotnet exec whatever.dll."
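
That whole flow is only a couple of commands (a quick sketch - the folder name is whatever you like):

C:\Users\scott\Desktop\hello> dotnet new console
C:\Users\scott\Desktop\hello> dotnet run
Hello World!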

What could be easier?

What about .NET Core as scripting?

Check out dotnet script:

C:\Users\scott\Desktop\scriptie> dotnet tool install -g dotnet-script
You can invoke the tool using the following command: dotnet-script
C:\Users\scott\Desktop\scriptie>copy con helloworld.csx
Console.WriteLine("Hello world!");
^Z
1 file(s) copied.
C:\Users\scott\Desktop\scriptie>dotnet script helloworld.csx
Hello world!

NOTE: I was a little tricky there in step two. I did a "copy con filename" to copy from the console to the destination file, then used Ctrl-Z to finish the copy. Feel free to just use notepad or vim. That's not dotnet-script-specific, that's Hanselman-specific.

Pretty cool eh? If you were doing this in Linux or OSX you'll need to include a "shebang" as the first line of the script. This is a standard thing for scripting files like bash, python, etc.

#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");

This lets the operating system know what scripting engine handles this file.
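
With the shebang in place, mark the script executable and you can run it directly (a sketch):

$ chmod +x hello.csx
$ ./hello.csx
Hello world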

If you want to refer to a NuGet package within a script (*.csx) file, you'll use the Roslyn #r syntax:

#r "nuget: AutoMapper, 6.1.0"
Console.WriteLine("whatever);

Even better! Once you have "dotnet-script" installed as a global tool as above:

dotnet tool install -g dotnet-script

You can use it as a REPL! Finally, the C# REPL (Read Evaluate Print Loop) I've been asking for for only a decade! ;)

C:\Users\scott\Desktop\scriptie>dotnet script
> 2+2
4
> var x = "scott hanselman";
> x.ToUpper()
"SCOTT HANSELMAN"

This is super useful as a learning tool if you're teaching C# in a lab/workshop situation. Of course you could also learn using http://try.dot.net in the browser as well.

In the past you may have used ScriptCS for C# scripting. There's a number of cool C#/F# scripting options. This is certainly not a new thing:

Mono scripting
ScriptCS (Filip was core on this)
Roslyn Scripting APIs (Roslyn is underneath most scripting environments)
CS-Script

In this case, I was very impressed with the ease of use of dotnet-script as a global tool and its simplicity. Go check out https://github.com/filipw/dotnet-script and try it out today!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Using Enhanced Mode Ubuntu 18.04 for Hyper-V on Windows 10

Oct 10, 2018

Description:

I run Windows as my daily driver and I use WSL (Windows Subsystem for Linux) all day long, but WSL is just the command line and has some perf issues with heavy file system work. I use Docker for Windows, which works amazingly and has good perf, but sometimes I want to test on a full Ubuntu Desktop.

ASIDE: No joke. My Linux/Ubuntu bona fides go back a while. Here's me installing Ubuntu 10.04 on Windows 7 over 8 years ago. Umuntu ngumuntu ngabantu!

To be frank, historically Ubuntu has sucked on Windows' Hyper-V. If you wanted to get a higher (read: usable) resolution it would take a miracle. If you wanted shared clipboards or shared disk drives, well, again, a miracle or a ton of manual setup. It's possible but it's not fun.

Why can't it be easy? Well, it is. I installed the Windows 10 "October 2018 Update" - yes, the naming is confusing. It's Windows 10 "1809" - that's 2018 and the 9th month. Just type "Winver" from the Start menu. You may have "1803" from March. Go update.

Windows 10 includes Hyper-V Quick Create which has this suspiciously short list under "Select an operating system." Anytime a list has 1 or 2 items and some whitespace that means it will someday have n+1 list items.

Recently Ubuntu 18.04.1 LTS showed up in this list. You can quickly and easily create an Ubuntu VM from here and it's all handled, downloading, network switch, VM create, etc.

Create Virtual Machine

I dig it. So click create, start it up...get to the set up screen. Now, here, make sure you click "Require my password to login." What we want to do won't work with "Log in Automatically" and you don't want that anyway.

Setting up an Ubuntu VM

After you've created your VM and got it mostly setup, close the Hyper-V client window. Just X it out. The VM is still running of course.

Go over to Hyper-V Manager and right click on it and "Connect."

Connect to VM

You'll see a resolution dialog...pick one! Go crazy! Do be aware that there are issues on 4K displays but you can adjust within Ubuntu itself.

Set Resolution

Now, BEFORE you click Connect, click "Show Options" and then "Local Resources." Under here, uncheck Smart Cards and Check "Drives."

Uncheck Smart Cards and Check Drives

Click OK and Connect...and you get this weird dialog! You're actually RDP'ing into Ubuntu! Rather than using the historical weird Hyper-V Client stuff to talk to Ubuntu and struggle with video cards and resolutions, here you are literally just Remote Desktoping into Ubuntu using integrated open source xrdp!

Login with your name and password (remember before when I said don't automatically login? This is why.)

Login to xrdp

What about Dynamic Resizing?

Here's an even better possible future. What we REALLY want (don't we, Dear Reader) is Dynamic Resolution and Resizing without Reconnection! Today you can just close and reconnect to change resolutions but I'd love to just resize the Ubuntu window like I do Windows 7/8/10 VM client windows.

The feature "Dynamic resolution update" was introduced in RDP 8.1. It enables to resize screen resolution on-the-fly.

Since we are using xrdp, which is open source at https://github.com/neutrinolabs/xrdp/, AND there's even an issue about this, AND a lovely person has the code in their own branch and has agreed to possibly upstream it, maybe this great feature will just light up for folks who use Hyper-V Quick Create. Certainly we're talking weeks and months here (unless you want to help) but the lion's share of the work is done. I'm looking forward to resizing Ubuntu VMs dynamically.

What's in Enhanced Mode Today?

Back to today! You can read about how Linux VMs (Ubuntu or Arch) are set up in this GitHub repo https://github.com/Microsoft/linux-vm-tools. You can set them up yourself with scripts, but the nice thing about Hyper-V Quick Create is that the work is done for us to make these "enhanced session" RDP-friendly VMs. No need to fear, you can just read the scripts yourself.
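Under the hood those scripts mostly install xrdp and its Hyper-V socket transport inside the VM, and the host-side piece is flipping the VM's enhanced session transport over to Hyper-V sockets. A minimal sketch of that host-side step, assuming the Hyper-V PowerShell module and a VM named "Ubuntu 18.04.1 LTS" (Quick Create handles this for you - shown only to illustrate what "enhanced session" means):

# Run in an elevated PowerShell on the host; the VM name is just an example
Set-VM -VMName "Ubuntu 18.04.1 LTS" -EnhancedSessionTransportType HvSocket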

I can connect quickly and Enhanced Mode VMs give me:

a shared clipboard
the resolution of my choice on connect
fast painting/video/scrolling
automatic shared drives
smooth and automatic mouse capture

Fantastic.

Ubuntu on Windows 10

What about installing Visual Studio Code? Of course. And also .NET Core, natch.
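If you want a quick way to get both inside the Ubuntu VM, one option is snaps (this is just a sketch - there are apt-based routes too):

sudo snap install code --classic
sudo snap install dotnet-sdk --classic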

image

This took like 10 minutes and 8 of them were waiting for Hyper-V Quick Create to download Ubuntu. Try it out!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

Troubleshooting Windows 10 Nearby Sharing and Bluetooth Antennas

Oct 5, 2018

Description:

wifi

When building my Ultimate Developer PC I picked this motherboard, and it's lovely.

ASUS ROG STRIX LGA2066 X299 ATX Motherboard - Good solid board with built in BT and Wifi, an M.2 heatsink included, 3x PCIe 3.0 x16 SafeSlots (supports triple @ x16/x16/x8), 1x PCIe 3.0 x4, 2x PCIe 3.0 x1 and a Max of 128 gigs of RAM. It also has 8x USB 3.1s and a USB C which is nice.

I put it all together and I'm thrilled with the machine. However, recently I was trying to use the new Windows 10 "Nearby Devices" feature.

It's this cool feature that lets you share stuff to "Nearby Devices" - that means your laptop, other desktops, whatever. Similar to AirDrop, it solves that problem of moving stuff between devices without using an intermediate server.

You can turn it on in Settings on Windows 10 and decide if you want to receive data from everyone or just contacts.

Nearby Sharing

So I started using it on my new desktop, IRONHEART, but I kept getting this "Looking for nearby devices" dialog...and it would just do nothing.

Looking for Nearby Devices

It turns out that the ASUS Motherboard also comes with a Wi-Fi Antenna. I don't use Wifi (I'm wired) so I didn't bother attaching it. It seems that this antenna is also a Bluetooth antenna and if you plug it in you'll ACTUALLY GET A LOVELY BLUETOOTH SIGNAL. Who knew? ;)

Now I can easily right click on files in Explorer or Web Pages in Edge and transfer them between systems.

Sharing a file with Nearby Sharing

A few tips on Nearby Sharing

Make sure you know your visibility settings. From the Start Menu type "nearby sharing" and confirm them.
Make sure the receiving device doesn't have "Focus Assist" on (via the Action Center in the lower right of the screen) or you might miss the notification.
And if you're using a desktop like me, ahem, plug in your BT antenna.

Hope this helps someone because Nearby Sharing is a great feature that I'm now using all the time.

Sponsor: Telerik DevCraft is the comprehensive suite of .NET and JavaScript components and productivity tools developers use to build high-performant, modern web, mobile, desktop apps and chatbots. Try it!
© 2018 Scott Hanselman. All rights reserved.
     

Headless CMS and Decoupled CMS in .NET Core

Oct 3, 2018

Description:

Headless by Wendy used under CC https://flic.kr/p/HkESxW

I'm sure I'll miss some, so if I do, please sound off in the comments and I'll update this post over the next week or so!

Lately I've been noticing a lot of "Headless" CMSs (Content Management System). A ton, in fact. I wanted to explore this concept and see if it's a fad or if it's really something useful.

With the rise of clean RESTful APIs has come the rise of Headless CMS systems. We've all evaluated CMS systems (ones that included both front- and back-ends) and found the front-end wanting. Perhaps it lacks flexibility OR it's way too flexible and overwhelming. In fact, when I wrote my podcast website I considered a CMS but decided it felt too heavy for just a small site.

A Headless CMS is a back-end only content management system (CMS) built from the ground up as a content repository that makes content accessible via a RESTful API for display on any device.

I could start with a database but what if I started with a CMS that was just a backend - a headless CMS. I'll handle the front end, and it'll handle the persistence.
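Whatever headless CMS you pick, the consumption pattern is usually the same: GET some JSON over HTTP, deserialize it, and render it however you like. Here's a generic sketch of that pattern (the endpoint URL and Article shape are made up for illustration - they aren't any particular CMS's actual API):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class Article
{
public string Title { get; set; }
public string Body { get; set; }
}

public class CmsClient
{
private static readonly HttpClient http = new HttpClient();

// Hypothetical endpoint - substitute whatever your headless CMS exposes
public async Task<Article> GetArticleAsync(string slug)
{
var json = await http.GetStringAsync($"https://example-cms.local/api/articles/{slug}");
return JsonConvert.DeserializeObject<Article>(json);
}
}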

Here's what I found when exploring .NET Core-based Headless CMSs. One thing worth noting, is that given Docker containers and the ease with which we can deploy hybrid systems, some of these solutions have .NET Core front-ends and "who cares, it returns JSON" for the back-end!

Lynicon

Lynicon is literally implemented as a NuGet Library! It stores its data as structured JSON. It's built on top of ASP.NET Core and uses MVC concepts and architecture.

It does include a front-end for administration but it's not required. It will return HTML or JSON depending on what HTTP headers are sent in. This means you can easily use it as the back-end for your Angular or existing SPA apps.

Lynicon is largely open source at https://github.com/jamesej/lyniconanc. If you want to take it to the next level there's a small fee that gives you updated searching, publishing, and caching modules.

ButterCMS

ButterCMS is an API-based CMS that seamlessly integrates with ASP.NET applications. It has an SDK that drops into ASP.NET Core and also returns data as JSON. Pulling the data out and showing it in a view is easy.

public class CaseStudyController : Controller
{
private ButterCMSClient Client;

private static string _apiToken = "";

public CaseStudyController()
{
Client = new ButterCMSClient(_apiToken);
}

[Route("customers/{slug}")]
public async Task<ActionResult> ShowCaseStudy(string slug)
{
var json = await Client.ListPageAsync("customer_case_study", slug);
dynamic page = ((dynamic)JsonConvert.DeserializeObject(json)).data.fields;
ViewBag.SeoTitle = page.seo_title;
ViewBag.FacebookTitle = page.facebook_open_graph_title;
ViewBag.Headline = page.headline;
ViewBag.CustomerLogo = page.customer_logo;
ViewBag.Testimonial = page.testimonial;
return View("Location");
}
}

Then of course output into Razor (or putting all of this into a RazorPage) is simple:

<html>
<head>
<title>@ViewBag.SeoTitle</title>
<meta property="og:title" content="@ViewBag.FacebookTitle" />
</head>
<body>
<h1>@ViewBag.Headline</h1>
<img width="100%" src="@ViewBag.CustomerLogo">
<p>@ViewBag.Testimonial</p>
</body>
</html>

Butter is a little different (and somewhat unusual) in that their backend API is a SaaS (Software as a Service) and they host it. They then have SDKs for lots of platforms including .NET Core. The backend is not open source while the front-end is https://github.com/ButterCMS/buttercms-csharp.

Piranha CMS

Piranha CMS is built on ASP.NET Core and is open source on GitHub. It's also totally package-based using NuGet and can be easily started up with a dotnet new template like this:

dotnet new -i Piranha.BasicWeb.CSharp
dotnet new piranha
dotnet restore
dotnet run

It even includes a new Blog template that includes Bootstrap 4.0 and is all set for customization. It does include an optional lightweight front-end, but you can use it as a guideline to create your own client code. One nice touch is that Piranha also includes image resizing and cropping.

Umbraco Headless

The main ASP.NET website currently uses Umbraco as its CMS. Umbraco is a well-known open source CMS that will soon include a Headless option for more flexibility. The open source code for Umbraco is up here https://github.com/umbraco.

Orchard Core

Orchard is a CMS with a very strong community and fantastic documentation. Orchard Core is a redevelopment of Orchard using open source ASP.NET Core. While it's not "headless" it is using a Decoupled Architecture. Nothing would prevent you from removing the UI and presenting the content with your own front-end. It's also cross-platform and container friendly.

Squidex

"Squidex is an open source headless CMS and content management hub. In contrast to a traditional CMS Squidex provides a rich API with OData filter and Swagger definitions." Squidex is build with ASP.NET Core and the CQRS pattern and works with both Windows and Linux on today's browsers.

Squidex is open source with excellent docs at https://docs.squidex.io. They are also working on a hosted version you can play with here https://cloud.squidex.io. Samples on how to consume it are here https://github.com/Squidex/squidex-samples.

The consumption is super clean:

[Route("/{slug},{id}/")] public async Task<IActionResult> Post(string slug, string id) { var post = await apiClient.GetBlogPostAsync(id); var vm = new PostVM { Post = post }; return View(vm); }

And then the View:

@model PostVM
@{
ViewData["Title"] = Model.Post.Data.Title;
}

<div>
<h2>@Model.Post.Data.Title</h2>

@Html.Raw(Model.Post.Data.Text)
</div>

What .NET Core Headless CMSs did I miss? Let me know.

This definitely isn't a fad. It makes a lot of sense to me architecturally. Given the proliferation of "backend as a service" systems, DocumentDBs like Cosmos and Mongo, it follows that a headless CMS could easily fit into my systems. One less DB schema to think about, no need to roll my own auth/auth.

*Photo "headless" by Wendy used under CC https://flic.kr/p/HkESxW

Sponsor: Telerik DevCraft is the comprehensive suite of .NET and JavaScript components and productivity tools developers use to build high-performant, modern web, mobile, desktop apps and chatbots. Try it!


© 2018 Scott Hanselman. All rights reserved.
     

Exploring .NET Core's SourceLink - Stepping into the Source Code of NuGet packages you don't own

Sep 28, 2018

Description:

According to https://github.com/dotnet/sourcelink, SourceLink "enables a great source debugging experience for your users, by adding source control metadata to your built assets."

Sounds fantastic. I download a NuGet package to use something like Json.NET or whatever all the time, and I'd love to be able to "Step Into" the source even if I don't have it laying around. Per the GitHub readme, it's both language and source control agnostic. I read that to mean "not just C# and not just GitHub."

Visual Studio 15.3+ supports reading SourceLink information from symbols while debugging. It downloads and displays the appropriate commit-specific source for users, such as from raw.githubusercontent, enabling breakpoints and all other sources debugging experience on arbitrary NuGet dependencies. Visual Studio 15.7+ supports downloading source files from private GitHub and Azure DevOps (former VSTS) repositories that require authentication.

Looks like Cameron Taggart did the original implementation and then the .NET team worked with Cameron and the .NET Foundation to make the current version. Also cool.

Download Source and Continue Debugging

Let me see if this really works and how easy (or not) it is.

I'm going to make a little library using the 5 year old Pseudointernationalizer from here. Fortunately the main function is pretty pure and drops into a .NET Standard library neatly.

I'll put this on GitHub, so I will include "PublishRepositoryUrl" and "EmbedUntrackedSources" as well as including the PDBs. So far my CSPROJ looks like this:

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
<PublishRepositoryUrl>true</PublishRepositoryUrl>
<EmbedUntrackedSources>true</EmbedUntrackedSources>
<AllowedOutputExtensionsInPackageBuildOutputFolder>$(AllowedOutputExtensionsInPackageBuildOutputFolder);.pdb</AllowedOutputExtensionsInPackageBuildOutputFolder>
</PropertyGroup>
</Project>

Pretty straightforward so far. As I am using GitHub I added this reference, but if I was using GitLab or BitBucket, etc, I would use that specific provider per the docs.

<ItemGroup>
<PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.0.0-beta-63127-02" PrivateAssets="All"/>
</ItemGroup>

Now I'll pack up my project as a NuGet package.

D:\github\SourceLinkTest\PsuedoizerCore [master ≡]> dotnet pack -c release
Microsoft (R) Build Engine version 15.8.166+gd4e8d81a88 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restoring packages for D:\github\SourceLinkTest\PsuedoizerCore\PsuedoizerCore.csproj...
Generating MSBuild file D:\github\SourceLinkTest\PsuedoizerCore\obj\PsuedoizerCore.csproj.nuget.g.props.
Restore completed in 96.7 ms for D:\github\SourceLinkTest\PsuedoizerCore\PsuedoizerCore.csproj.
PsuedoizerCore -> D:\github\SourceLinkTest\PsuedoizerCore\bin\release\netstandard2.0\PsuedoizerCore.dll
Successfully created package 'D:\github\SourceLinkTest\PsuedoizerCore\bin\release\PsuedoizerCore.1.0.0.nupkg'.

Let's look inside the .nupkg as they are just ZIP files. Ah, check out the generated *.nuspec file that's inside!

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
<metadata>
<id>PsuedoizerCore</id>
<version>1.0.0</version>
<authors>PsuedoizerCore</authors>
<owners>PsuedoizerCore</owners>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>Package Description</description>
<repository type="git" url="https://github.com/shanselman/PsuedoizerCore.git" commit="35024ca864cf306251a102fbca154b483b58a771" />
<dependencies>
<group targetFramework=".NETStandard2.0" />
</dependencies>
</metadata>
</package>

See under repository it points back to the location AND commit hash for this binary! That means I can give it to you or a coworker and they'd be able to get to the source. But what's the consumption experience like? I'll go over and start a new Console app that CONSUMES my NuGet library package. To make totally sure that I don't accidentally pick up the source from my machine I'm going to delete the entire folder. This source code no longer exists on this machine.

I'm using a "local" NuGet Feed. In fact, it's just a folder. Check it out:

D:\github\SourceLinkTest\AConsumerConsole> dotnet add package PsuedoizerCore -s "c:\users\scott\desktop\LocalNuGetFeed"
Writing C:\Users\scott\AppData\Local\Temp\tmpBECA.tmp
info : Adding PackageReference for package 'PsuedoizerCore' into project 'D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj'.
log : Restoring packages for D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj...
info : GET https://api.nuget.org/v3-flatcontainer/psuedoizercore/index.json
info : NotFound https://api.nuget.org/v3-flatcontainer/psuedoizercore/index.json 465ms
log : Installing PsuedoizerCore 1.0.0.
info : Package 'PsuedoizerCore' is compatible with all the specified frameworks in project 'D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj'.
info : PackageReference for package 'PsuedoizerCore' version '1.0.0' added to file 'D:\github\SourceLinkTest\AConsumerConsole\AConsumerConsole.csproj'.

See how I used -s to point to an alternate source? I could also configure my NuGet feeds, be they local directories or internal servers, with "dotnet new nugetconfig", listing my NuGet servers in the order I want them searched.
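For reference, a nuget.config with a local folder feed added looks roughly like this (the key names and folder path are just my example):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<clear />
<add key="local" value="c:\users\scott\desktop\LocalNuGetFeed" />
<add key="nuget" value="https://api.nuget.org/v3/index.json" />
</packageSources>
</configuration>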

Here is my little app:

using System;
using Utils;

namespace AConsumerConsole
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine(Pseudoizer.ConvertToFakeInternationalized("Hello World!"));
}
}
}

And the output is [Ħęľľő Ŵőřľđ! !!! !!!].

But can I step into it? I don't have the source remember...I'm using SourceLink.

In Visual Studio 2017 I confirm that SourceLink is enabled. This is the Portable PDB version of SourceLink, not the "SourceLink 1.0" that was "Enable Source Server Support." That only worked on Windows.

Enable Source Link Support

You'll also want to turn off "Just My Code" since, well, this isn't your code.

Disable Just My Code

Now I'll start a Debug Session in my consumer app and hit F11 to Step Into the Library whose source I do not have!

Source Link Will Download from The Internet

Fantastic. It's going to get the source for me! Without git cloning the repository it will seamlessly let me continue my debugging session.

The temporary file ended up in C:\Users\scott\AppData\Local\SourceServer\4bbf4c0dc8560e42e656aa2150024c8e60b7f9b91b3823b7244d47931640a9b9 if you're interested. I'm able to just keep debugging as if I had the source...because I do! It came from the linked source.

Debugging into a NuGet that I don't have the source for

Very cool. I'm going to keep digging into SourceLink and learning about it. It seems that if YOU have a library or published NuGet either inside your company OR out in the open source world that you absolutely should be using SourceLink.

You can even install the sourcelink global tool and test your .pdb files for greater insight.

D:\github\SourceLinkTest\PsuedoizerCore>dotnet tool install --global sourcelink
D:\github\SourceLinkTest\PsuedoizerCore\bin\release\netstandard2.0>sourcelink print-urls PsuedoizerCore.pdb
43c83e7173f316e96db2d8345a3f963527269651 sha1 csharp D:\github\SourceLinkTest\PsuedoizerCore\Psuedoizer.cs
https://raw.githubusercontent.com/shanselman/PsuedoizerCore/02c09baa8bfdee3b6cdf4be89bd98c8157b0bc08/Psuedoizer.cs
bfafbaee93e85cd2e5e864bff949f60044313638 sha1 csharp C:\Users\scott\AppData\Local\Temp\.NETStandard,Version=v2.0.AssemblyAttributes.cs
embedded

Think about how much easier consumers of your library will have it when debugging their apps! Your package is no longer a black box. Go set this up on your projects today.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

A command-line REPL for RESTful HTTP Services

Sep 25, 2018

Description:

HTTP REPL

My, that's a lot of acronyms. REPL means "Read Evaluate Print Loop." You know how you can run "python" and then just type 2+2 and get an answer? That's a type of REPL.

The ASP.NET Core team is building a REPL that lets you explore and interact with your RESTful services. Ideally your services will have Swagger/OpenAPI available that describes the service. Right now this Http-REPL is just being developed and they're aiming to release it as a .NET Core Global Tool in .NET Core 2.2.

You can install global tools like this:

dotnet tool install -g nyancat

Then you can run "nyancat." Get a list of installed tools like this:

C:\Users\scott> dotnet tool list -g
Package Id                Version                 Commands
--------------------------------------------------------------------
altcover.global           3.5.560                 altcover
dotnet-depends            0.1.0                   dotnet-depends
dotnet-httprepl           2.2.0-preview3-35304    dotnet-httprepl
dotnet-outdated           2.0.0                   dotnet-outdated
dotnet-search             1.0.0                   dotnet-search
dotnet-serve              1.0.0                   dotnet-serve
git-status-cli            1.0.0                   git-status
github-issues-cli         1.0.0                   ghi
nukeeper                  0.7.2                   NuKeeper
nyancat                   1.0.0                   nyancat
project2015to2017.cli     1.8.1                   csproj-to-2017

For the HTTP-REPL, since it's not yet released you have to point the Tool Feed to a daily build location, so do this:

dotnet tool install -g --version 2.2.0-* --add-source https://dotnet.myget.org/F/dotnet-core/api/v3/index.json dotnet-httprepl

Then run it with "dotnet httprepl." I'd like another name? What do you think? RESTy? POSTr? API Test? API View?

Here's an example run where I start up a Web API.

C:\SwaggerApp> dotnet httprepl
(Disconnected)~ set base http://localhost:65369
Using swagger metadata from http://localhost:65369/swagger/v1/swagger.json
http://localhost:65369/~ dir
.        []
People   [get|post]
Values   [get|post]
http://localhost:65369/~ cd People
/People    [get|post]
http://localhost:65369/People~ dir
.      [get|post]
..     []
{id}   [get]
http://localhost:65369/People~ get
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 26 Sep 2018 20:25:37 GMT
Server: Kestrel
Transfer-Encoding: chunked

[
  {
    "id": 1,
    "name": "Scott Hunter"
  },
  {
    "id": 0,
    "name": "Scott Hanselman"
  },
  {
    "id": 2,
    "name": "Scott Guthrie"
  }
]

Take a moment and read that. It can be a little confusing. It's not HTTPie, it's not Curl, but it's also not PostMan. It's something that you run and stay running in if you're a command line person and enjoy that space. It's as if you "cd (change directory)" and "mount" a disk into your Web API.

You can use all the HTTP Verbs, and when POSTing you can set a default text editor and it will launch the editor with the JSON written for you! Give it a try!

A few gotchas/known issues:

You'll want to set a default Content-Type Header for your session. I think this should be the default.
set header Content-Type application/json
If the HTTP REPL doesn't automatically detect your Swagger/OpenAPI endpoint, you'll need to set it manually:
set base https://yourapi/api/v1/
set swagger https://yourapi/swagger.json
I haven't figured out how to get it to use VS Code as its default editor. Likely because "code.exe" isn't a thing. (It uses a batch .cmd file, which the HTTP REPL would need to special case.) For now, use an editor that's an EXE and point the HTTP REPL at it like this:
pref set editor.command.default 'c:\notepad2.exe'

I'm really enjoying this idea. I'm curious how you find it and how you'd see it being used. Sound off in the comments.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

Scripts to remove old .NET Core SDKs

Sep 20, 2018

Description:

That's a lot of .NET Core installations

.NET Core is lovely. Its usage is skyrocketing, it's open source, and .NET Core 2.1 has some amazing performance improvements. Just upgrading from 2.0 to 2.1 gave Bing a 34% performance boost.

However, those of us who install multiple .NET Core SDKs side by side have noticed that they add up if you are installing daily builds or updating very often. As of 2.x, .NET Core doesn't yet have an "uninstall all" or "uninstall all previews" option. There will be work done in .NET Core 3.0 that will mitigate this cumulative effect when you have lots of installers.

If you're taking dailies and it's time to tidy up, the short answer per Damian Edwards is "Delete them all, then nuke the dotnet folder in program files, then install the latest version."

Here's a PowerShell Script you can run on Windows as admin that will aggressively uninstall .NET Core SDKs.

Note the match at the top. Depending on your goals, you might want to change it to "Microsoft .NET Core SDK 2.1" or just "Microsoft .NET Core SDK 2."

Once it's all removed, then add the latest from https://www.microsoft.com/net/download/archives

A list of .NET Core SDKs

Here's the script, which is an improvement on Andrew's comment here. You can improve it as it's on GitHub here https://github.com/shanselman/RemoveDotNetCoreSDKInstallers. This script currently requires you to hit YES as the MSIs elevate. It doesn't work right when you try /passive as a switch. I'm interested if you can get a "torch all Core SDK installers and install LTS and Current" script working.

$app = Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -match "Microsoft .NET Core SDK" }
Write-Host $app.Name
Write-Host $app.IdentifyingNumber
pushd $env:SYSTEMROOT\System32
$app.identifyingnumber |% { Start-Process msiexec -wait -ArgumentList "/x $_" }
popd

This PowerShell is Windows-only, of course.

If you're on RHEL, Ubuntu/Debian, there are scripts here to try out https://github.com/dotnet/cli/tree/master/scripts/obtain/uninstall

Let me know if this script works for you.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

Azure DevOps Continuous Build/Deploy/Test with ASP.NET Core 2.2 Preview in One Hour

Sep 18, 2018

Description:

Hanselminutes Website

I've been doing Continuous Integration and Deployment for well over 13 years. We used a lot of custom scripts and a lovely tool called CruiseControl.NET to check out, build, test, and deploy our code.

However, it's easy to get lulled into complacency. To get lazy. I don't set up Automated Continuous Integration and Deployment for all my little projects. But I should.

I was manually deploying a change to my podcast website this evening via a git deploy to Azure App Service. Pushing to Azure this way via Git uses "Kudu" to actually build the site. However, earlier this week I was also trying to update my site to .NET Core 2.2 which is in preview. Plus I have Unit Tests that aren't getting run during deploy.

So look at it this way. My simple little podcast website with a few tests and the desire to use a preview .NET Core SDK means I've outgrown a basic "git push to prod" for deploy.

I remembered that Azure DevOps (formerly VSTS) is out and offers free unlimited minutes for open source projects. I have no excuse for my sloppy builds and manual deploys. It also has unlimited free private repos, although I'm happy at GitHub and have no reason to move.

It usually takes me 5-10 minutes for a manual build/test/deploy, so I gave myself an hour to see if I could get this same process automated in Azure DevOps. I've never used this before and I wanted to see if I could do it quickly, and if it was intuitive.

Let's review my goals.

My source is in GitHub
Build my ASP.NET Core 2.2 Web Site. I want to build with .NET Core 2.2 which is currently in Preview.
Run my xUnit Unit Tests. I have some Selenium Unit Tests that can't run in the cloud (at least, I haven't figured it out yet) so I need them skipped.
Deploy the resulting site to production in my Azure App Service

Cool. So I make a project and point Azure DevOps at my GitHub.

Azure DevOps: Source code in GitHub

They have a number of starter templates, so I was pleasantly surprised I didn't need to manually build my Build Configuration myself. I'll pick ASP.NET app. I could pick Azure Web App for ASP.NET but I wanted a little more control.

Select a template

Now I've got a basic build pipeline. You can see it will use NuGet, get the packages, build the app, test the assemblies (if there are tests...more on that later) and then publish (zip) the build artifacts.

Build Pipeline

I then clicked Save & Queue...and it failed. Why? It says that I'm targeting .NET Core 2.2 and it doesn't support anything over 2.1. Shoot.

Agent says it doesn't support .NET Core 2.2

Fortunately there's a pipeline element that I can add called ".NET Core Tool Installer" that will get specific versions of the .NET Core SDK.

NOTE: I've emailed the team that ".NET Tool Installer" is the wrong name. A .NET Tool is a totally different thing. This task should be called the ".NET Core SDK Installer." Because it wasn't, it took me a minute to find it and figure out what it does.

I'm using SDK version 2.2.100-preview2-009404 so I put that string into the properties.

Install the .NET Core SDK custom version

At this point it builds, but I get a test error.

There are two problems with the tests. When I look at the logs I can see that the "testadapter.dll" that comes with xUnit is mistakenly being pulled into the test runner! Why? Because the "Test Files" spec includes a VERY greedy glob in the form of **\*test*.dll. Perhaps testadapter shouldn't include the word test, but then it wouldn't be well-named.

**\$(BuildConfiguration)\**\*test*.dll
!**\obj\**

My test DLLs are all named with "tests" in the filename so I'll change the glob to "**\$(BuildConfiguration)\**\*tests*.dll" to cast a less-wide net.

Screenshot (45)

I have four Selenium Tests for my ASP.NET Core site but I don't want them to run when the tests are run in a Docker Container or, in this case, in the Cloud. (Until I figure out how)

I use SkippableFacts from XUnit and do this:

public static class AreWe
{
public static bool InDockerOrBuildServer {
get {
string retVal = Environment.GetEnvironmentVariable("DOTNET_RUNNING_IN_CONTAINER");
string retVal2 = Environment.GetEnvironmentVariable("AGENT_NAME");
return (
(String.Compare(retVal, Boolean.TrueString, ignoreCase: true) == 0)
||
(String.IsNullOrWhiteSpace(retVal2) == false));
}
}
}

Don't tease me. I like it. Now I can skip tests that I don't want running.

if (AreWe.InDockerOrBuildServer) return;
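For completeness, here's roughly how that check wires into a test with the Xunit.SkippableFact package, so the test shows up as "skipped" in the results rather than silently passing (the test name and body here are hypothetical):

[SkippableFact]
public void HomePageLoadsInSelenium()
{
// Skip (not pass, not fail) when running in a container or on a build agent
Skip.If(AreWe.InDockerOrBuildServer, "Selenium isn't available in the build environment.");
// ...drive the browser here...
}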

Now my tests run and I get a nice series of charts to show that fact.

22 tests, 4 skipped

I have it building and tests running.

I could add the Deployment Step to the Build but Azure DevOps Pipelines includes a better way. I make a Release Pipeline that is separate. It takes Artifacts as input and runs n number of Stages.

Creating a new Release Pipeline

I take the Artifact from the Build (the zipped up binaries) and pass them through the pipeline into the Azure App Service Deploy step.

Screenshot (49)

Here's the deployment in progress.

Manually Triggered Release

Cool! Now that it works and deploys, I can turn on Continuous Integration Build Triggers (via an automatic GitHub webhook) as well as Continuous Deployment triggers.

Continuous Deployment

Azure DevOps even includes badges that I can add to my readme.md so I always know by looking at GitHub if my site builds AND if it has successfully deployed.

4 releases, the final one succeeded

Now I can see each release as it happens and if it's successful or not.

Build Succeeded, Never Deployed

To top it all off, now that I have all this data and these pipelines, I even put together a nice little dashboard in about a minute to show Deployment Status and Test Trends.

My build and deployment dashboard

When I combine the DevOps Dashboard with my main Azure Dashboard I'm amazed at how much information I can get in so little effort. Consider that my podcast (my little business) is a one-person shop.

Azure Dashboard

And now I have a CI/CD pipeline with integrated testing gates that deploys worldwide. Many years ago this would have required a team and a lot of custom code.

Today it took an hour. Awesome.

I check into GitHub, which kicks off a build, runs the tests, emails me the results, and deploys the website if everything is cool. Of course, if I had another team member I could put in deployment gates or reviews, etc.

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

A complete containerized .NET Core Application microservice that is as small as possible

Sep 14, 2018

Description:

OK, maybe not technically a microservice, but that's a hot buzzword these days, right? A few weeks ago I blogged about Improvements on ASP.NET Core deployments on Zeit's now.sh and making small container images. By the end I was able to cut my container size in half.

The trimming I was using is experimental and very aggressive. If your app loads things at runtime - like ASP.NET Razor Pages sometimes does - you may end up getting weird errors at runtime when a Type is missing. Some types may have been trimmed away!

For example:

fail: Microsoft.AspNetCore.Server.Kestrel[13]
Connection id "0HLGQ1DIEF1KV", Request id "0HLGQ1DIEF1KV:00000001": An unhandled exception was thrown by the application.
System.TypeLoadException: Could not load type 'Microsoft.AspNetCore.Diagnostics.IExceptionHandlerPathFeature' from assembly 'Microsoft.Extensions.Primitives, Version=2.1.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.HostFiltering.HostFilteringMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Hosting.Internal.HostingApplication.ProcessRequestAsync(Context context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)

Yikes!

I'm doing a self-contained deployment and then trimming the result! Richard Lander has a great dockerfile example. Note how he's doing the package addition with the dotnet CLI with "dotnet add package" and subsequent trim within the Dockerfile (as opposed to you adding it to your local development copy's csproj).

I'm adding the Tree Trimming Linker in the Dockerfile, so the trimming happens when the container image is built. I'm using the dotnet command to run "dotnet add package ILLink.Tasks". This means I don't need to reference the linker package at development time - it's all at container build time.

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

I did end up hitting this bug in the Linker (it's not Released) but there's an easy workaround. I just need to set the property CrossGenDuringPublish to false in the project file.

If you look at the Advanced Instructions for the Linker you can see that you can "root" types or assemblies. Root means "don't mess with these or stuff that hangs off them." So I just need to exercise my app at runtime and make sure that all the types that my app needs are available, but no unnecessary ones.

I added the Assemblies I wanted to keep (not remove) while trimming/linking to my project file:

<Project Sdk="Microsoft.NET.Sdk.Web">

<PropertyGroup>
<TargetFramework>netcoreapp2.1</TargetFramework>
<CrossGenDuringPublish>false</CrossGenDuringPublish>
</PropertyGroup>

<ItemGroup>
<LinkerRootAssemblies Include="Microsoft.AspNetCore.Mvc.Razor.Extensions;Microsoft.Extensions.FileProviders.Composite;Microsoft.Extensions.Primitives;Microsoft.AspNetCore.Diagnostics.Abstractions" />
</ItemGroup>

<ItemGroup>
<!-- this can be here, or can be done all at runtime in the Dockerfile -->
<!-- <PackageReference Include="ILLink.Tasks" Version="0.1.5-preview-1841731" /> -->
<PackageReference Include="Microsoft.AspNetCore.App" />
</ItemGroup>

</Project>

My strategy for figuring out which assemblies to "root" and exclude from trimming was literally to just iterate. Build, trim, test, add an assembly by reading the error message, and repeat.

This sample ASP.NET Core app will deploy cleanly on Zeit with the smallest image footprint possible. https://github.com/shanselman/superzeit

Next I'll try an actual Microservice (as opposed to a complete website, which is what this is) and see how small I can get that. Such fun!

UPDATE: This technique works with "dotnet new webapi" as well and is about 73 megs per "docker images" and it's 34 megs when sent and squished through Zeit's "now" CLI.

Small services!

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.
© 2018 Scott Hanselman. All rights reserved.
     

How do you use System.Drawing in .NET Core?

Sep 12, 2018

Description:

I've been doing .NET image processing since the beginning. In fact I wrote about it over 13 years ago on this blog when I talked about Compositing two images into one from the ASP.NET Server Side, and in it I used System.Drawing to do the work. For over a decade folks using System.Drawing were just using it as a thin wrapper over GDI (Graphics Device Interface), a set of very old Win32 (Windows) unmanaged drawing APIs. We used them because they worked fine.

.NET Conf: Join us this week! September 12-14, 2018 for .NET Conf! It's a FREE, 3 day virtual developer event co-organized by the .NET Community and Microsoft. Watch all the sessions here. Join a virtual attendee party after the last session ends on Day 1 where you can win prizes! Check out the schedule here and attend a local event in your area organized by .NET community influencers all over the world.

DotNetBot

For a while there was a package called CoreCompat.System.Drawing that was a .NET Core port of a Mono version of System.Drawing.

However, since then Microsoft has released System.Drawing.Common to provide access to GDI+ graphics functionality cross-platform.

There is a lot of existing code - mine included - that makes assumptions that .NET would only ever run on Windows. Using System.Drawing was one of those things. The "Windows Compatibility Pack" is a package meant for developers that need to port existing .NET Framework code to .NET Core. Some of the APIs remain Windows only but others will allow you to take existing code and make it cross-platform with a minimum of trouble.

Here's a super simple app that resizes a PNG to 128x128. However, it's a .NET Core app and it runs on both Windows and Linux (Ubuntu!).

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

namespace imageresize
{
class Program
{
static void Main(string[] args)
{
int width = 128;
int height = 128;
var file = args[0];
Console.WriteLine($"Loading {file}");
using(FileStream pngStream = new FileStream(args[0],FileMode.Open, FileAccess.Read))
using(var image = new Bitmap(pngStream))
{
var resized = new Bitmap(width, height);
using (var graphics = Graphics.FromImage(resized))
{
graphics.CompositingQuality = CompositingQuality.HighSpeed;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.CompositingMode = CompositingMode.SourceCopy;
graphics.DrawImage(image, 0, 0, width, height);
resized.Save($"resized-{file}", ImageFormat.Png);
Console.WriteLine($"Saving resized-{file} thumbnail");
}
}
}
}
}

Here it is running on Ubuntu:

Resizing Images on Ubuntu

NOTE that on Ubuntu (and other Linuxes) you may need to install some native dependencies, as System.Drawing sits on top of native libraries:

sudo apt install libc6-dev
sudo apt install libgdiplus

There's lots of great options for image processing on .NET Core now! It's important to understand that this System.Drawing layer is great for existing System.Drawing code, but you probably shouldn't write NEW image management code with it. Instead, consider one of the great other open source options.

ImageSharp - A cross-platform library for the processing of image files; written in C#. Compared to System.Drawing, ImageSharp has been able to develop something much more flexible, easier to code against, and much, much less prone to memory leaks. Gone are system-wide process-locks; ImageSharp images are thread-safe and fully supported in web environments.

Here's how you'd resize something with ImageSharp:

using (Image<Rgba32> image = Image.Load("foo.jpg"))
{
image.Mutate(x => x
.Resize(image.Width / 2, image.Height / 2)
.Grayscale());
image.Save("bar.jpg"); // Automatic encoder selected based on extension.
}

Magick.NET - A .NET library on top of ImageMagick
SkiaSharp - A .NET wrapper on top of Google's cross-platform Skia library

It's awesome that there are so many choices with .NET Core now!

Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

The Extremely Promising State of Diabetes Technology in 2018

Sep 7, 2018

Description:

This blog post is an update to these two Diabetes Technology blog posts:

The Promising State of Diabetes Technology in 2016
The Sad State of Diabetes Technology in 2012

You might also enjoy this video of the talk I gave at WebStock 2018 on Solving Diabetes with an Open Source Artificial Pancreas*.

First, let me tell you that insulin is too expensive in the US.

Between 2002 and 2013, the price of insulin jumped, with the typical cost for patients increasing from about $40 a vial to $130.

Open Source Artificial Pancreas on iPhone

For some of the newer insulins like the ones I use, I pay as much as $296 a bottle. I have a Health Savings Plan so this is often out of pocket until I hit the limit for the year.

People in America are rationing insulin. This is a demonstrable fact. I've personally mailed extra insulin to folks in need. I've met young people who lost their insurance at age 26 and have had to skip shots to save vials of insulin.

This is a problem, but on the technology side there's some extremely promising work happening, and we have really hit our stride in the last ten years.

I wrote the first Glucose Management system for the PalmPilot in 1998 called GlucoPilot, providing on-the-go in-depth analysis for the first time. The first thing that struck me was that the PalmPilot and the Blood Sugar Meter were the same size. Why did I need two devices with batteries, screens, buttons and a CPU? Why so many devices?

I've been told every year that a Diabetes Breakthrough is coming "in five years." It's been 25 years.

In 2001 I went on a trip across the country with my wife, an insulin pump and 8 PDAs (personal digital assistants, the "iPhones" of the time) and tried to manage my diabetes using all the latest wireless technology...this was the latest stuff 17 years ago. I had just moved from injections to an insulin pump. Even now in 2018, insulin pumps are mostly proprietary and super expensive. In fact, many folks in the States use out-of-warranty pumps purchased on Craigslist.

Fast forward to 2018 and I've been using an Open Source Artificial Pancreas for two years.

OpenAPS - Open Artificial Pancreas System. A platform for building a closed-loop with open tools.
AndroidAPS - A branch of OpenAPS running on Android
Loop/LoopKit - Open Source Artificial Pancreas running on the iPhone with a hardware bridge (RileyLink) to the pump. I run this pancreas, personally, and have for nearly 2 years.

Watch Dana Lewis (the originator of OpenAPS) talk about OpenAPS at OSCON!

The results speak for themselves. While I do have bad sugars sometimes, and I do struggle, if you look at my blood work my HA1c (the long term measurement of "how I'm doing") shows non-diabetic levels. To be clear - I'm fully and completely Type 1 diabetic, I produce zero insulin of my own. I take between 40 and 50 Units of insulin every day, and have for the last 25 years...but I will likely die of old age.

Open Source Artificial Pancreas === Diabetes results pic.twitter.com/ZSsApTLRXq

— Scott Hanselman (@shanselman) September 10, 2018

This is significant. Why? Because historically diabetics die of diabetes. While we wait (or more accurately, #WeAreNotWaiting) for a biological/medical solution to Type 1 diabetes, the DIY (Do It Yourself) community is just doing it ourselves.

Building on open hardware, open software, and reverse-engineered protocols for proprietary hardware, the online diabetes community literally has their choice of open source pancreases in 2018! Who would have imagined it. You can choose your algorithms, your phone, your pump, your continuous glucose meter.

Today, in 2018, you can literally change the code and recompile a personal branch of your own pancreas.

Watch my 2010 YouTube video "I am Diabetic" as I walk you through the medical hardware (pumps, needles, tubes, wires) in managing diabetes day to day. Then watch my 2018 talk on Solving Diabetes with an Open Source Artificial Pancreas*.

I believe that every diabetic should be offered a pump, a continuous glucose meter, and trained on some kind of artificial pancreas. A cloud based reporting system has also been a joy. My wife and family can see my sugar in real time when I'm away. My wife has even called me overseas to wake me up when I was in a bad sugar situation.

Artificial Pancreas generations

As the closed-hardware and closed-software medical companies work towards their own artificial pancreases, the open source community feels those companies would better serve us by opening up their protocols, using standard Bluetooth ISO profiles, and using security best practices.

Looking at the table above, the open source community is squarely in #4 and moving quickly into #5. But why did we have to do this ourselves? We got tired of waiting.

All in all, through open software and hardware, I can tell you that my life is SO MUCH BETTER than it was when I was first diagnosed. I figure we'll have this all figured out in about five years, right? ;)

THANK YOU!

MORE DIABETES READING

Bridging Dexcom Share CGM Receivers and Nightscout
Hacking Diabetes
Visualizing your real-time blood sugar values in Git
Diabetes Technology: Dexcom G5 CGM Review
Introducing Web Tiles for Microsoft Band - My diabetes data on a Band!
Diabetics: It's fun to say Bionic Pancreas but how about a reality check
It's WAY too early to call this Insulin Pump an Artificial Pancreas

* Yes there are some analogies, stretched metaphors, and oversimplifications in this talk. This talk is an introduction to the space for the normally-sugared. If you are a diabetes expert you might watch and say...eh...ya, I mean, it kind of works like that. Please take the talk in the thoughtful spirit it was intended.

Sponsor: Get home early, eat supper on time and coach your kids in soccer. Moving workloads to Azure just got easy with Azure NetApp Files. Sign up to Preview Azure NetApp Files!


© 2018 Scott Hanselman. All rights reserved.
     

Always Be Closing...Pull Requests

Sep 5, 2018

Description:

Always be closing

I was looking at a Well Known Open Source Project on GitHub today. It had like 978 Pull Requests. A "PR" means "hey here's some code I did for your project, you can PULL it from here and merge it into your code!"

But these were Open Pull Requests. Pending. Limbo Pull Requests. Dating back to 2015.

Why do Pull Requests stay open?

Why do projects keep Pull Requests open? What's a reasonable amount of time? Here's a few thoughts.

PR as Call to Action - PRs are a shout. They are HERE IS SOME CODE and they create work for the maintainer. They are needy things and require review and merging, but even worse, sometimes manual merging. Plus for folks new to Git and Open Source, asking them to "rebase on top of latest" may be enough for them to just give up.
Fear of Closing - If you close a PR without merging it, it's a rejection. It's a statement that this work isn't going to be used, and there's always a chance that the person who did the work will feel pretty bad about it.
Abandoned - Sometimes the originator of the PR disappears. The PR is effectively abandoned. These should be closed after a time.
Opened so long they can't be merged - The problem with PRs that are open for long is that they become impossible to merge. The cost of understanding whether they are still relevant plus resolving the merge conflicts might be higher than the value of the PR itself.
Incorrectly created - A PR originator may intend to change a single word (misspelling) but their PR changes CRs to LFs or Tabs to Spaces. It's a hassle.
Formatting - It's generally considered poor form to send a PR out of the blue where one just ran a linter or formatter. If the project wanted that done they'd ask for it.
Totally not aligned with Roadmap - If a PR shows up without context or communication, it may not be aligned with the direction of the project.
Surprise PR - Unfortunately some PRs show up out of the blue with major changes, file moves, or no context. If a PR wasn't asked for or isn't borne of an Issue, you'll likely have trouble pushing it through.

Thanks to Jon and Immo for their thoughts on this (likely incomplete) list. Jess Frazelle has a great post on "The Art of Closing" that I just found, and it includes a glorious gif from Glengarry Glen Ross where Always Be Closing comes from (warning, clip has dated and offensive language).

Jess suggests a few ways to Always Be Closing.

A few things that can help make your open source project successful AND stay tidy!

including a CONTRIBUTING.md - GitHub has some good guidance https://help.github.com/articles/setting-up-your-project-for-healthy-contributions/ and most of the dotnet repos have some decent contribution guidelines. I LOVE Gina Häußge's Contributing.md on her open source project "OctoPrint."
using Pull Request templates that give clear guidance on how to submit a successful pull request. https://help.github.com/articles/creating-a-pull-request-template-for-your-repository
using bots to test and build PRs, sign CLAs (Contributor License Agreements) and move the ball forward.

What do you think? Why do PRs stay open?

Sponsor: Get home early, eat supper on time and coach your kids in soccer. Moving workloads to Azure just got easy with Azure NetApp Files. Sign up to Preview Azure NetApp Files!


© 2018 Scott Hanselman. All rights reserved.
     

Interesting bugs - MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. Image is too small.

Aug 30, 2018

Description:

I got a very strange warning recently when building a .NET Core app with "dotnet build."

MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible.
Image is too small.

Interesting Bug used under CC https://flic.kr/p/4SpmL6

Eek! It's clear, in that something is "too small" but what? A file I guess? Maybe it's the wrong size?

The error code is MSB3246 which is nice and googleable/searchable, but it was confusing because I couldn't figure out which file specifically. It just felt vague.

BUT!

I had recently been overclocking my machine (overly aggressively, gulp, about 40%) and had a very nasty hard reboot. As a result I had a few dozen files get orphaned - specifically the files were zero'ed out! Zero is small, right?

Turns out you can pass parameters over to MSBuild from "dotnet build" and see what MSBuild is doing internally. For example, you could pass

/fileLoggerParameters:verbosity=diagnostic

but that's long. So how about:

dotnet build /flp:v=diag

Cool. What deep logging do I see now?

Primary reference "deliberately.zero.bytes.dll". (TaskId:41)
13:36:52.397 1:7>C:\Program Files\dotnet\sdk\2.1.400\Microsoft.Common.CurrentVersion.targets(2110,5): warning MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. Image is too small. [S:\work\zero-byte-ref\zero-byte-ref.csproj]
Resolved file path is "S:\work\zero-byte-ref\deliberately.zero.bytes.dll". (TaskId:41)
Reference found at search path location "{RawFileName}". (TaskId:41)

Now with "verbose" turned on I can see that one of the references is zero'ed out/corrupted/bad. I reinstalled .NET Core in my case and doubled checked all the DLLs/Assemblies that I was bringing in - I also ran chkdsk /f - and I was back in business!

I hope this helps someone who might stumble on error MSB3246 and wonder what's up.

Even better, thanks to Rainer Sigwald who filed a bug against MSBuild to update the error message to be more clear. In the future I'll be able to debug this without changing verbosity!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.
© 2018 Scott Hanselman. All rights reserved.
     

Improvements on ASP.NET Core deployments on Zeit's now.sh and making small container images

Aug 29, 2018

Description:

Back in March of 2017 I blogged about Zeit and their cool deployment system "now." Zeit will take any folder and deploy it to the web easily. Better yet, if you have a Dockerfile in that folder, Zeit will just use that for the deployment.

image

Zeit's free Open Source account has a limit of 100 megs for the resulting image, and with the right Dockerfile ASP.NET Core apps can come in at less than 77 megs. You just need to be smart about a few things. Additionally, it's running in a somewhat constrained environment, so ASP.NET's assumptions around FileWatchers can occasionally cause you to see errors like

at System.IO.FileSystemWatcher.StartRaisingEvents()
Unhandled Exception: System.IO.IOException:
The configured user limit (8192) on the number of inotify instances has been reached.
at System.IO.FileSystemWatcher.StartRaisingEventsIfNotDisposed(

While the DOTNET_USE_POLLING_FILE_WATCHER environment variable is set by default for the "FROM microsoft/dotnet:2.1-sdk" Dockerfile, it's not set at runtime. That's dependent on your environment.

Here's my Dockerfile for a simple project called SuperZeit. Note that the project is structured with a SLN file, which I recommend.

Let me call out a few things.

First, we're doing a Multi-stage build here. The SDK is large. You don't want to deploy the compiler to your runtime image!

Second, the first copy commands just copy the sln and the csproj. You don't need the source code to do a dotnet restore! (Did you know that?) Not deploying source means that your docker builds will be MUCH faster as Docker will cache the steps and only regenerate things that change. Docker will only run dotnet restore again if the solution or project files change. Not the source.

Third, we are using the aspnetcore-runtime image here. Not the dotnetcore one. That means this image includes the binaries for .NET Core and ASP.NET Core. We don't need or want to include them again. If you were doing a publish with the -r switch, you'd be doing a self-contained build/publish. You'd end up copying TWO .NET Core runtimes into a container! That'll cost you another 50-60 megs and it's just wasteful. If you want to do that, go explore the very good examples on the .NET Docker Repo on GitHub https://github.com/dotnet/dotnet-docker/tree/master/samples - Optimizing Container Size, the .NET Core Alpine Docker Sample (builds, tests, and runs an application using Alpine), and the .NET Core self-contained Sample (builds and runs an application as a self-contained application).

Finally, since some container systems like Zeit have modest settings for inotify instances (to avoid abuse, plus most folks don't use them as often as .NET Core does) you'll want to set ENV DOTNET_USE_POLLING_FILE_WATCHER=true which I do in the runtime image.

So starting from this Dockerfile:

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
RUN dotnet publish -c Release -o out

FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Remember the layers of the Docker images, as if they were a call stack:

Your app's files ASP.NET Core Runtime .NET Core Runtime .NET Core native dependencies (OS specific) OS image (Alpine, Ubuntu, etc)

For my little app I end up with a 76.8 meg image. If I want, I can add the experimental .NET IL Trimmer. It won't make a difference with this app as it's already pretty simple but it could with a larger one.

BUT! What if we changed the layering to this?

Your app's files along with a self-contained copy of ASP.NET Core and .NET Core
.NET Core native dependencies (OS specific)
OS image (Alpine, Ubuntu, etc)

Then we could do a self-contained deployment and trim the result! Richard Lander has a great Dockerfile example.

See how he does the package addition with the dotnet CLI ("dotnet add package") and the subsequent trim within the Dockerfile itself (as opposed to adding it to your local development copy's csproj).

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Now at this point, I'd want to see how small the IL Linker made my ultimate project. The goal is to be less than 75 megs. However, I think I've hit this bug so I will have to head to bed and check on it in the morning.

The project is at https://github.com/shanselman/superzeit and you can just clone and "docker build" and see the bug.

However, if you check the comments in the Dockerfile and just use "FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime" it works fine. I just think I can get it even smaller than 75 megs.

Talk to you soon, Dear Reader! (I'll update this post when I find out about that bug...or perhaps my bug!)

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Decoding an SSH Key from PEM to BASE64 to HEX to ASN.1 to prime decimal numbers

Aug 24, 2018

Description:

I'm reading a new chapter of The Imposter's Handbook: Season 2 that Rob and I are working on. He's digging into the internals of what's exactly in your SSH Key.

Decoding a certificate

I generated a key with no password:

ssh-keygen -t rsa -C scott@myemail.com

Inside the generated file is this text that we've all seen before but few of us have cracked open.

-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAtd8As85sOUjjkjV12ujMIZmhyegXkcmGaTWk319vQB3+cpIh
Wu0mBke8R28jRym9kLQj2RjaO1AdSxsLy4hR2HynY7l6BSbIUrAam/aC/eVzJmg7
qjVijPKRTj7bdG5dYNZYSEiL98t/+XVxoJcXXOEY83c5WcCnyoFv58MG4TGeHi/0
coXKpdGlAqtQUqbp2sG7WCrXIGJJdBvUDIQDQQ0Isn6MK4nKBA10ucJmV+ok7DEP
kyGk03KgAx+Vien9ELvo7P0AN75Nm1W9FiP6gfoNvUXDApKF7du1FTn4r3peLzzj
50y5GcifWYfoRYi7OPhxI4cFYOWleFm1pIS4PwIDAQABAoIBAQCBleuCMkqaZnz/
6GeZGtaX+kd0/ZINpnHG9RoMrosuPDDYoZZymxbE0sgsfdu9ENipCjGgtjyIloTI
xvSYiQEIJ4l9XOK8WO3TPPc4uWSMU7jAXPRmSrN1ikBOaCslwp12KkOs/UP9w1nj
/PKBYiabXyfQEdsjQEpN1/xMPoHgYa5bWHm5tw7aFn6bnUSm1ZPzMquvZEkdXoZx
c5h5P20BvcVz+OJkCLH3SRR6AF7TZYmBEsBB0XvVysOkrIvdudccVqUDrpjzUBc3
L8ktW3FzE+teP7vxi6x/nFuFh6kiCDyoLBhRlBJI/c/PzgTYwWhD/RRxkLuevzH7
TU8JFQ9BAoGBAOIrQKwiAHNw4wnmiinGTu8IW2k32LgI900oYu3ty8jLGL6q1IhE
qjVMjlbJhae58mAMx1Qr8IuHTPSmwedNjPCaVyvjs5QbrZyCVVjx2BAT+wd8pl10
NBXSFQTMbg6rVggKI3tHSE1NSdO8kLjITUiAAjxnnJwIEgPK+ljgmGETAoGBAM3c
ANd/1unn7oOtlfGAUHb642kNgXxH7U+gW3hytWMcYXTeqnZ56a3kNxTMjdVyThlO
qGXmBR845q5j3VlFJc4EubpkXEGDTTPBSmv21YyU0zf5xlSp6fYe+Ru5+hqlRO4n
rsluyMvztDXOiYO/VgVEUEnLGydBb1LwLB+MVR2lAoGAdH7s7/0PmGbUOzxJfF0O
OWdnllnSwnCz2UVtN7rd1c5vL37UvGAKACwvwRpKQuuvobPTVFLRszz88aOXiynR
5/jH3+6IiEh9c3lattbTgOyZx/B3zPlW/spYU0FtixbL2JZIUm6UGmUuGucs8FEU
Jbzx6eVAsMojZVq++tqtAosCgYB0KWHcOIoYQUTozuneda5yBQ6P+AwKCjhSB0W2
SNwryhcAMKl140NGWZHvTaH3QOHrC+SgY1Sekqgw3a9IsWkswKPhFsKsQSAuRTLu
i0Fja5NocaxFl/+qXz3oNGB56qpjzManabkqxSD6f8o/KpeqryqzCUYQN69O2LG9
N53L9QKBgQCZd0K6RFhhdJW+Eh7/aIk8m8Cho4Im5vFOFrn99e4HKYF5BJnoQp4p
1QTLMs2C3hQXdJ49LTLp0xr77zPxNWUpoN4XBwqDWL0t0MYkRZFoCAG7Jy2Pgegv
uOuIr6NHfdgGBgOTeucG+mPtADsLYurEQuUlfkl5hR7LgwF+3q8bHQ==
-----END RSA PRIVATE KEY-----

The private key is an ASN.1 (Abstract Syntax Notation One) encoded data structure. It's a funky format but it's basically a packed format with the ability for nested trees that can hold booleans, integers, etc.

However, ASN.1 is just the binary packed "payload." It's not the "container." For example, there are envelopes and there are letters inside them. The envelope is the PEM (Privacy Enhanced Mail) format. Such things start with ----- BEGIN SOMETHING ----- and end with ----- END SOMETHING ------. If you're familiar with BASE64, your spidey sense may tell you that this is a BASE64 encoded file. Not everything that's BASE64 turns into a friendly ASCII string. This turns into a bunch of bytes you can view in HEX.

We can first decode the PEM file into HEX. Yes, I know there's lots of ways to do this stuff at the command line, but I like showing and teaching using some of the many encoding/decoding websites and utilities there are out there. I also love using https://cryptii.com/ for these things as you can build a visual pipeline.

308204A40201000282010100B5DF00B3CE6C3948E3923575DAE8
CC2199A1C9E81791C9866935A4DF5F6F401DFE7292215AED2606
47BC476F234729BD90B423D918DA3B501D4B1B0BCB8851D87CA7
63B97A0526C852B01A9BF682FDE57326683BAA35628CF2914E3E
DB746E5D60D65848488BF7CB7FF97571A097175CE118F3773959
C0A7CA816FE7C306E1319E1E2FF47285CAA5D1A502AB5052A6E9
DAC1BB582AD7206249741BD40C8403410D08B27E8C2B89CA040D
74B9C26657EA24EC310F9321A4D372A0031F9589E9FD10BBE8EC
FD0037BE4D9B55BD1623FA81FA0DBD45C3029285EDDBB51539F8
AF7A5E2F3CE3E74CB919C89F5987E84588BB38F87123870560E5
snip
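If you'd rather do that PEM-to-HEX step in code instead of on a website, here's a minimal C# sketch. It assumes the traditional "BEGIN RSA PRIVATE KEY" PEM shown above, and the file name is just whatever ssh-keygen generated for you:

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // PEM is just BASE64 between the BEGIN/END lines; strip the armor,
        // decode it, and dump the raw ASN.1 (DER) bytes as HEX
        string pem = File.ReadAllText("id_rsa")
            .Replace("-----BEGIN RSA PRIVATE KEY-----", "")
            .Replace("-----END RSA PRIVATE KEY-----", "")
            .Replace("\r", "").Replace("\n", "");

        byte[] der = Convert.FromBase64String(pem);
        Console.WriteLine(BitConverter.ToString(der).Replace("-", ""));
    }
}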

This ASN.1 JavaScript decoder can take the HEX and parse it for you. Or you can parse that ASN.1 packed format at the *nix command line and see that there are nine big integers inside (I trimmed them for this blog).

openssl asn1parse -in notreal
0:d=0 hl=4 l=1188 cons: SEQUENCE
4:d=1 hl=2 l= 1 prim: INTEGER :00
7:d=1 hl=4 l= 257 prim: INTEGER :B5DF00B3CE6C3948E3923575DAE8CC2199A1C9E81791C9866935A4DF5F6F401DFE7292215
268:d=1 hl=2 l= 3 prim: INTEGER :010001
273:d=1 hl=4 l= 257 prim: INTEGER :8195EB82324A9A667CFFE867991AD697FA4774FD920DA671C6F51A0CAE8B2E3C30D8A1967
534:d=1 hl=3 l= 129 prim: INTEGER :E22B40AC22007370E309E68A29C64EEF085B6937D8B808F74D2862EDEDCBC8CB18BEAAD48
666:d=1 hl=3 l= 129 prim: INTEGER :CDDC00D77FD6E9E7EE83AD95F1805076FAE3690D817C47ED4FA05B7872B5631C6174DEAA7
798:d=1 hl=3 l= 128 prim: INTEGER :747EECEFFD0F9866D43B3C497C5D0E3967679659D2C270B3D9456D37BADDD5CE6F2F7ED4B
929:d=1 hl=3 l= 128 prim: INTEGER :742961DC388A184144E8CEE9DE75AE72050E8FF80C0A0A38520745B648DC2BCA170030A97
1060:d=1 hl=3 l= 129 prim: INTEGER :997742BA4458617495BE121EFF68893C9BC0A1A38226E6F14E16B9FDF5EE072981790499E

Per the spec the format is this:

An RSA private key shall have ASN.1 type RSAPrivateKey:

RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER -- (inverse of q) mod p }

I found the description for how RSA works in this blog post very helpful as it uses small numbers as examples. The variable names here like p, q, and n are agreed upon and standard.

The fields of type RSAPrivateKey have the following meanings:
o version is the version number, for compatibility
with future revisions of this document. It shall
be 0 for this version of the document.
o modulus is the modulus n.
o publicExponent is the public exponent e.
o privateExponent is the private exponent d.
o prime1 is the prime factor p of n.
o prime2 is the prime factor q of n.
o exponent1 is d mod (p-1).
o exponent2 is d mod (q-1).
o coefficient is the Chinese Remainder Theorem
coefficient q-1 mod p.
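If you're on a newer .NET (Core 3.0 or later), you can also let the framework do the ASN.1 unpacking and hand you those same integers as an RSAParameters struct. A minimal sketch, assuming the PKCS#1 PEM above (earlier versions of .NET don't have ImportRSAPrivateKey, so you'd need a library like BouncyCastle instead):

using System;
using System.IO;
using System.Security.Cryptography;

class Program
{
    static void Main()
    {
        // Strip the PEM armor and decode to the raw PKCS#1 RSAPrivateKey bytes
        string pem = File.ReadAllText("id_rsa")
            .Replace("-----BEGIN RSA PRIVATE KEY-----", "")
            .Replace("-----END RSA PRIVATE KEY-----", "")
            .Replace("\r", "").Replace("\n", "");
        byte[] der = Convert.FromBase64String(pem);

        using (var rsa = RSA.Create())
        {
            // ImportRSAPrivateKey understands the RSAPrivateKey structure described above
            rsa.ImportRSAPrivateKey(der, out _);
            RSAParameters p = rsa.ExportParameters(includePrivateParameters: true);

            Console.WriteLine($"n (modulus): {BitConverter.ToString(p.Modulus)}");
            Console.WriteLine($"e (public exponent): {BitConverter.ToString(p.Exponent)}");
            Console.WriteLine($"p (prime1): {BitConverter.ToString(p.P)}");
            Console.WriteLine($"q (prime2): {BitConverter.ToString(p.Q)}");
        }
    }
}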

Let's look at one of those big numbers. It's super long in hexadecimal.

747EECEFFD0F9866D43B3C497C5D0E3967679659D2C270B3D945
6D37BADDD5CE6F2F7ED4BC600A002C2FC11A4A42EBAFA1B3D354
52D1B33CFCF1A3978B29D1E7F8C7DFEE8888487D73795AB6D6D3
80EC99C7F077CCF956FECA5853416D8B16CBD89648526E941A65
2E1AE72CF0511425BCF1E9E540B0CA23655ABEFADAAD028B

That hexadecimal number converted to decimal is this long ass number. It's 308 digits long!

22959099950256034890559187556292927784453557983859951626187028542267181746291385208056952622270636003785108992159340113537813968453561739504619062411001131648757071588488220532539782545200321908111599592636973146194058056564924259042296638315976224316360033845852318938823607436658351875086984433074463158236223344828240703648004620467488645622229309082546037826549150096614213390716798147672946512459617730148266423496997160777227482475009932013242738610000405747911162773880928277363924192388244705316312909258695267385559719781821111399096063487484121831441128099512811105145553708218511125708027791532622990325823

It's hard work to prove a number like this is prime, but there's a great Integer Factorization Calculator that actually uses WebAssembly and your own local CPU to check such things. Expect to wait a long time, sometimes until the heat death of the universe. ;)
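If you want to reproduce that hex-to-decimal conversion yourself, System.Numerics.BigInteger will happily parse a number this big. A quick sketch (the leading "0" keeps Parse from treating the high bit as a sign bit):

using System;
using System.Globalization;
using System.Numerics;

class Program
{
    static void Main()
    {
        // Paste the full hexadecimal value in here
        string hex = "747EECEFFD0F9866D43B3C497C5D0E39";

        BigInteger value = BigInteger.Parse("0" + hex, NumberStyles.HexNumber);
        Console.WriteLine(value);
    }
}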

Rob and I are finding it really cool to dig just below the surface of common things we look at all the time. I have often opened a key file in a text editor but never drawn a straight and complete line through decoding, unpacking, and decoding again, all the way to a mathematical formula. I feel I'm filling in some major gaps in my knowledge!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

How do you even know this crap?

Aug 22, 2018

Description:

Imposter's Handbook
This post won't be well organized, so lower your expectations first. When Rob Conery first wrote "The Imposter's Handbook" I was LOVING IT. It's a fantastic book written for imposters by an imposter. Remember, I'm the original phony.

Now he's working on The Imposter's Handbook: Season 2 and I'm helping. The book is currently in Presale and we're releasing PDFs every 2 to 3 weeks. Some of the ideas from the book will come from blog posts like or similar to this one. Since we are using Continuous Delivery and an Iterative Process to ship the book, some of the blog posts (like this one) won't be fully baked until they show up in the book (or not). See how I equivocated there? ;)

The next "Season" of The Imposter's Handbook is all about the flow of information. Information flowing through encoding, encryption, and transmission over a network. I'm also interested in the flow of information through one's brain as they move through the various phases of being a developer. Bear with me (and help me in the comments!).

I was recently on a call with two other developers, and it would be fair to say that we were of varied skill levels. We were doing some HTML and CSS work that I would say I'm competent at, but by no means an expert in. Since our skill levels didn't fall on a single axis, we'd really need some Dungeons & Dragons cards to express our competencies.

D&D Cards from Battle Grip

I might be HTML 8, CSS 6, Computer Science 9, Obscure Trivia 11, for example.

We were asked to make a little banner with some text that could be later closed with some iconography that would represent close/dismiss/go away.

One engineer suggested "Here's some text + ICON.PNG." The next offered a more scalable option: "Here's some text + ICON.SVG."

Both are fine ideas that would work, while perhaps later having DPI or maintenance issues, but truly, perfectly cromulent ideas.

I have never been given this task, I am not a designer, and I am a mediocre front-end person. I asked what they wanted it to look like and they said "maybe a square with an X or a circle with an X or a circle with a line."

I offered, "You know, there MUST be a Unicode glyph for that. I mean, there's one for poop." Apparently I say poop in business meetings more than any other middle manager at the company, but that's fodder for another blog post.

We searched and lo and behold we found ☒ and ⊝ and simply added them to the end of the string. They scale visibly, require no downloads or extra dependencies, and can be colored and styled nicely because they are text.
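If you want to try it yourself, the glyphs are just ordinary characters in a string, so this tiny C# sketch prints them (your terminal font still has to contain the glyphs, of course):

using System;
using System.Text;

class Program
{
    static void Main()
    {
        // The glyphs ride along in the string like any other text
        Console.OutputEncoding = Encoding.UTF8;
        Console.WriteLine("Here's some text ☒");
        Console.WriteLine("Here's some text ⊝");
    }
}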

One of the engineers said "how do you even know this crap?" I smiled and shrugged and we moved on to the doing.

To be clear, this post isn't self-congratulatory. Perhaps you had the same idea. This interaction was all of 10 minutes long. But I'm interested in the HOW did I know this? Note that I didn't actually KNOW that these glyphs existed. I knew only that they SHOULD exist. They MUST.

How many times have you been coding and said "You know, there really must be a function/site/tool that does x/y/z?" All the time, right? You don't know the answers but you know someone must have AND must have solved it in a specific way such that you could find it. A new developer doesn't have this intuition - this sense of technical smell - yet.

How is technical gut and intuition and smell developed? Certainly by doing, by osmosis, by time, by sleeping, and waking, and doing it again.

I think it's exposure. It's exposure to a diverse set of technical problems that all build on a solid base of fundamentals.

Rob and I are going to try to expand on how this technical confidence gets developed in The Imposter's Handbook: Season 2 as topics like Logic, Binary and Logical Circuits, Compression and Encoding, Encryption and Cryptanalysis, and Networking and Protocols are discussed. But I want to also understand how/if/when these topics and examples excite the reader...and most importantly do they provide the reader with that missing Tetris Piece of Knowledge that moves you from a journeyperson developer to someone who can more confidently wear the label Computer Science 9, Obscure Trivia 11.

via GIPHY

What do you think? Sound off in the comments and help me and Rob understand!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Upgrading existing .NET project files to the lean new CSPROJ format from .NET Core

Aug 17, 2018

Description:

Evocative random source code photo
If you've looked at csproj (C# (csharp) projects) in the past in a text editor you probably looked away quickly. They are effectively MSBuild files that orchestrate the build process. Phrased differently, a csproj file is an instance of an MSBuild file.

In Visual Studio 2017 and .NET Core 2 (and beyond) the csproj format is MUCH MUCH leaner. There's a lot of smart defaults, support for "globbing" like **/*.cs, etc., and you don't need to state a bunch of obvious stuff. Truly you can take earlier msbuild/csproj files and get them down to a dozen lines of XML, plus package references. PackageReferences (references to NuGet packages) should be moved out of packages.config and into the csproj. This lets you manage all project dependencies in one place and gives you an uncluttered view of top-level dependencies.
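To make that concrete, here's roughly what a minimal SDK-style class library csproj looks like (the target framework and the single PackageReference are just illustrative, not from any particular project):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
  </ItemGroup>

</Project>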

However, upgrading isn't as simple as "open the old project file and have VS automatically migrate you."

You have some options when migrating to .NET Core and the .NET Standard.

First, and above all, run the .NET Portability Analyzer and find out how much of your code is portable. Then you have two choices.

Create a new project file with something like "dotnet new classlib" and then manually get your projects building from the top (most common ancestor) project down
Try to use an open source 3rd party migration tool

Damian on my team recommends option one - a fresh project - as you'll learn more and avoid bringing cruft over. I agree, until there's dozens of projects, then I recommend trying a migration tool AND then comparing it to a fresh project file to avoid adding cruft. Every project/solution is different, so expect to spend some time on this.

The best way to learn this might be by watching it happen for real. Wade from Salesforce was tasked with upgrading his 4+ year old .NET Framework (Windows) based SDK to portable and open source .NET Core. He had some experience building for older versions of Mono and was thoughtful about not calling Windows-specific APIs so he knows the code is portable. However he needs to migrate the project files and structure AND get the Unit Tests running with "dotnet test" and the command line.

I figured I'd give him a head start by actually doing part of the work. It's useful to do this because, frankly, things go wrong and it's not pretty!

I started with Hans van Bakel's excellent CsProjToVS2017 global tool. It does an excellent job of getting your project 85% of the way there. To be clear, don't assume anything and not every warning will apply to you. You WILL need to go over every line of your project files, but it is an extraordinarily useful tool. If you have .NET Core 2.1, install it globally like this:

dotnet tool install Project2015To2017.Cli --global

Then it's called (unfortunately) with another command, "csproj-to-2017", and you can pass in a solution or an individual csproj.

After you've done the administrivia of the actual project conversion, you'll also want to make educated decisions about the 3rd party libraries you pull in. For example, if you want to make your project cross-platform BUT you depend on some library that is Windows only, why bother trying to port? Well, many of your favorite libraries DO have "netstandard" or ".NET Standard" versions. You'll see in the video below how I pull Wade's project's references forward with a new version of JSON.NET and a new NUnit. By the end we are building at the command line and running tests as well, with code coverage.

Please head over to my YouTube and check it out. Note this happened live and spontaneously plus I had a YouTube audience giving me helpful comments, so I'll address them occasionally.

LIVE: Upgrading an older .NET SDK to .NET Core and .NET Standard

If you find things like this useful, let me know in the comments and maybe I'll do more of them. Also, do you think things like this belong on the Visual Studio Twitch Channel? Go follow my favs on Twitch CSharpFritz and Noopkat for more live coding fun!

Friend of the Blog: Want to learn more about .NET for free? Join us at DotNetConf! It's a free virtual online community conference September 12-14, 2018. Head over to https://www.dotnetconf.net to learn more and for a Save The Date Calendar Link.


© 2018 Scott Hanselman. All rights reserved.
     

Azure Application Insights warned me of failed dependent requests on my site

Aug 15, 2018

Description:

I've been loving Application Insights ever since I hooked it up to my Podcast Site. Application Insights is stupid cheap and provides an unreal number of insights into what's going on in your site. I hooked it up and now I have a nice dashboard showing what's up. It's pretty healthy.

Lovely graphics showing HEALTHY websites

Here's an interesting view that shows the Availability Test that's checking my site as well as outbound calls (there isn't a lot as I cache aggressively) to SimpleCast where I host my shows.

A chart showing 100% availability

Availability is important, of course, so I set up some tests from a number of locations. I don't want the site to be down in Brazil but up in France, for example.

However, I got an email a week ago that said my site had a sudden rise in failures. Here's the thing, though. When I set up a web test I naively thought I was setting up a "ping." You know, a knock on the door. I figured if the WHOLE SITE was down, they'd tell me.

Here's my availability for today, along with timing from a bunch of locations world wide.

A nice green availability chart

Check out this email. The site is fine; that is, the primary requests didn't fail. But a dependent request DID fail! Application Insights noticed that an image referenced on the home page was suddenly a 404! Why suddenly? Because I put in the wrong date and time for an episode and it auto-published before I had the guest's headshot!

I wouldn't have noticed this missing image until a user emailed me, so I was impressed that Application Insights gave me the heads up.

1 dependant request failed

Here is the chart for that afternoon when I published a bad show. Note that the site is technically up (it was) but a dependent request (a request after the main GET) failed.

Some red shows my site isn't very available

This is a client side failure, right? An image didn't load and it notified me. Cool. I can (and do) also instrument the back end code. Here you can see someone keeps sending me a PUT request, perhaps trying to poke at my site. By the way, random PUT person has been doing this for months.
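Beyond what gets collected automatically, you can also track your own custom events and traces from the back end. A minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore package is wired up (the page model, property names, and event name here are hypothetical):

using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class ShowModel : PageModel
{
    private readonly TelemetryClient _telemetry;

    // TelemetryClient is registered in DI when Application Insights is added to the app
    public ShowModel(TelemetryClient telemetry)
    {
        _telemetry = telemetry;
    }

    public void OnGet(int id)
    {
        // Shows up under Custom Events in the Application Insights portal
        _telemetry.TrackEvent("ShowPageViewed", new Dictionary<string, string>
        {
            ["showId"] = id.ToString()
        });
    }
}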

I can also see slowest requests and dig as deep as I want. In fact I did a whole video on digging into Azure Application Insights that's up on YouTube.

A rogue PUT request

I've been using Application Insights for maybe a year or two now. Its depth continues to astound me. I KNOW I'm not using it to its fullest and I love that I'm still surprised by it.

Friend of the Blog: Want to learn more about .NET for free? Join us at DotNetConf! It's a free virtual online community conference September 12-14, 2018. Head over to https://www.dotnetconf.net to learn more and for a Save The Date Calendar Link.


© 2018 Scott Hanselman. All rights reserved.
     

Building the Ultimate Developer PC 3.0 - The Parts List for my new computer, IronHeart

Aug 10, 2018

Description:

Ironheart is my new i9 PC
It's been 7 years since the last time I built "The Ultimate Developer PC 2.0," and over 11 since the original Ultimate Developer PC that Jeff Atwood built for me. That last PC was $3000 and, well, frankly, that's a heck of a lot of money. Now, I see a lot of you dropping $2k and $3k on MacBook Pros and Surfaces without apparently sweating it too much, but I expect that much money to last a LONG TIME.

Do note that while my job does give me a laptop for work purposes every 3 years, my desktop is my own, paid for with my own money and not subsidized by my employer in any way. This PC is mine.

I wrote about money and The Programmer's Priorities in my post on Brain, Bytes, Back, and Buns. As developers we spend a lot of time looking at monitors, sitting in chairs, using computers, and sleeping. It stands to reason we should invest in good chairs, good monitors and PCs, and good beds. That also means good mice and keyboards, of course.

Was that US$3000 investment worth it? Absolutely. I worked on my PC2.0 nearly every day for 7 years. That's ~2500 days at about $1.25 a day if you consider upgradability.

Continuous PC Improvement via reasonably priced upgrades

How could I use the same PC for 7 years? Because it's modular.

Hard Drive - I upgraded 3 years back to a 512 gig Samsung 850 SSD and it's still a fantastic drive at only about $270 today. This kept my machine going and going FAST.
Video Card - I found a used NVidia 1070 on Craigslist for $250, but they are $380 new. A fantastic card that can do VR quite nicely, but for me, it ran three large monitors for years.
Monitors - I ran a 30" Dell as my main monitor that I bought used nearly 10 years ago. It does require a DisplayPort to Dual-Link DVI active adapter but it's still an amazing 2560x1600 monitor even today.
Memory - I started at 16 gigs and upgraded to 24 gigs when memory got cheaper.

All this adds up to me running the same first generation i7 processor up until 2018. And frankly, I probably could have gone another 3-5 years happily.

So why upgrade? I was gaming more and more as well as using my HTC Vive Pro and while the 1070 was great (although always room for improvement) I was pushing the original Processor pretty hard. On the development side, I have been running somewhat large distributed systems with Docker for Windows and Kubernetes, again, pushing memory and CPU pretty hard.

Ultimately however, price/performance for build-your-own PCs got to a reasonable place plus the ubiquity of 4k displays at reasonable costs made me think I could build a machine that would last me a minimum of 5 years, if not another 7.

Specifications

I bought my monitors from Dell directly and the PC parts from NewEgg.com. I named my machine IRONHEART after Marvel's Riri Williams.

Intel Core i9-7900X 10-Core 3.3 Ghz Desktop Processor - I like this processor for a few reasons. Yes, I'm an Intel fan, but I like that it has 44 PCI Express lanes (that's a lot), which means, given I'm not running SLI with my video card, I'll have MORE than enough bandwidth for any peripherals I can throw at this machine. Additionally, its caching situation is nuts. There's 640k L1, 10 MEGS L2, and 13.8 MEGS L3. 640 ought to be enough for anyone, right? ;) It's also got 20 logical processors plus Intel Turbo Boost Max that will move specific cores to 4.5GHz as needed, up from the base 3.3Ghz freq. It can also support up to 128 GB of RAM; although I'll start with 32 gigs, it's nice to have the room to grow.
288-pin DDR4 3200Mhz (PC4 25600) Memory 4 x 8G - These also have a fun lighting effect, and since my case is clear why not bling it out a little?
ASUS ROG STRIX LGA2066 X299 ATX Motherboard - Good solid board with built-in BT and WiFi, an M.2 heatsink included, 3x PCIe 3.0 x16 SafeSlots (supports triple @ x16/x16/x8), 1x PCIe 3.0 x4, 2x PCIe 3.0 x1 and a max of 128 gigs of RAM. It also has 8x USB 3.1s and a USB C, which is nice.
Corsair Hydro Series H100i V2 Extreme Performance Water/Liquid CPU Cooler - My last PC had a heat sink you could see from space. It was massive and unruly. This cooler/fan combo mounts cleanly and then sits at the top of the case. It opens up a TON of room and looks fantastic. I really like everything Corsair does.
WD Black 512GB Performance SSD - M.2 2280 PCIe NVMe Solid State Drive - It's amazing how cheap great SSDs are and I felt it was time to take it to the next level and try M.2 drives. M.2 is the "next generation form factor" for drives and replaces mSATA. M.2 SSDs are tiny and fast. This drive can do as much as 2 gigs a second, as much as 3x the speed of a SATA SSD. And it's cheap.
CORSAIR Crystal 570X RGB Tempered Glass, Premium ATX Mid Tower Case, White - I flipping love this case. It's white and clear, but mostly clear. The side is just a piece of tempered glass. There are three RGB LED fans in the front (along with the two I added on the top from the cooler, and one more in the back) and they all are software controllable. The case also has USB ports on top, which is great since it's sitting under my clear glass desk. It is very well thought out and includes many cable routing channels so your cables can be effectively invisible. Highly recommended.

Clear white case
The backside of the clear white Corsair case

Corsair 120mm RGB LED Fans - Speaking of fans, I got this three pack, bringing the total 120mm fan count to 6 (7 if you count the GPU fan that's usually off).
TWO Anker 10 Port 60W USB hubs - I have a Logitech Brio 4k camera, a Peavey PV6 USB Mixer, and a bunch of other USB3 devices like external hard drives, the Xbox Wireless Adapter and the like, so I got two of these fantastic hubs and double-taped them to the desk above the case.
ASUS ROG GeForce GTX 1080 Ti 11 gig Video Card - This was arguably over the top but in this case I treated myself. First, I didn't want to ever (remember my 5 year goal) sweat video perf. I am/was very happy with my 1070, which is less than half the price, but as I've been getting more into VR, the NVidia 1070 can struggle a little. Additionally, I set the goal to drive 3 4k monitors at 60hz with zero issues, and I felt that the 1080 Ti was a solid choice.
THREE Dell Ultra HD 4k Monitors P2715Q 27" - My colleague Damian LOVES these monitors. They are an excellent balance in size and cost and are well-calibrated from the factory. They are a full 4k and support DisplayPort and HDMI 2.0. Remember that my NVidia card has 2 DisplayPorts and 2 HDMI ports but I want to drive 3 monitors and 1 Vive Pro? I run the center monitor off DisplayPort and the left and right off HDMI 2.0. NOTE: The P2415Q and P2715Q both support HDMI 2.0 but it's not enabled from the factory. You'll need to enable HDMI 2.0 in the menus (read the support docs) and use a high-speed HDMI cable. Otherwise you'll get 4k at 30hz and that's really a horrible experience. You want 60hz for work at least. NOTE: When running the P2715Q off DisplayPort from an NVidia card you might initially get an output color format of YCbCr 4:2:2, which will make anti-aliased text have a colored haze, while the HDMI 2.0 displays look great with RGB color output. You'll need to go into the menus of the display itself and set the Input Color Format to RGB *and* also into the NVidia display settings after turning the monitor on and off to get it to stick. Otherwise you'll find the NVidia Control Panel will reset to the less desirable YCbCr422 format, causing one of your monitors to look different than the others. Last note: sometimes Windows will say that a DisplayPort monitor is running at 59Hz. That's almost assuredly a lie. Believe your video card.

Three monitors all running 4k 60hz

What about perf?

Developers develop, right? A nice .NET benchmark is to compile Orchard Core both "cold" and "warm." I use .NET Core 2.1 downloaded from http://www.dot.net

Orchard is a fully-featured CMS with 143 projects loaded into Visual Studio. MSBUILD and .NET Core in 2.1 support both parallel and incremental builds.

A warm build of Orchard Core on IRONHEART takes just under 10 seconds. UPDATE: With overclock and tuning it builds in 7.39 seconds. My Surface Pro 3 builds it warm in 62 seconds.
A totally cold build (after a dotnet clean) on IRONHEART takes 33.3 seconds. UPDATE: With overclock and tuning it builds in 21.2 seconds. My Surface Pro 3 builds it cold in 2.4 minutes.

Additionally, the CPUs in this case weren't working at full speed very long. This may be as fast as these 143 projects can be built. Note also that Visual Studio/MSBuild will use as many processors as your system can handle. In this case it's using 20 procs.
MSBuild building Orchard Core across 20 logical processors

I can choose to constrain things if I think the parallelism is working against me; for example, here I can try with just 4 processors. In my testing it doesn't appear that spreading the build across 20 processors is a problem. I tried just 10 (physical processors) and it builds in 12 seconds. With 20 processors (10 physical with hyperthreading, so 20 logical) it builds in 9.6 seconds, so there's clearly a law of diminishing returns here.

dotnet build /maxcpucount:4

Building Orchard Core in 10 seconds

Regardless, my podcast site builds in less than 2 seconds on my new machine, which makes me happy. I'm thrilled with this new machine and I hope it lasts me for many years.

PassMark

I like real world benchmarks, like building massive codebases and reading The Verge with an AdBlocker off, but I did run PassMark.

Passmark 98th percentile

UPDATE with Overclocking

I did some modest overclocking to about 4.5GHz as well as some fan control and temperature work, plus I'm trying it with Intel Turbo Boost Max turned off, and here's the updated PassMark taking the machine into the 99th percentile.

Overall 6075 -> 7285
CPU 19842 -> 23158
Disk 32985 -> 42426
2D Mark 724 -> 937 (not awesome)
3D Mark (originally failed with a window resize) -> 15019
Memory 2338 -> 2827 (also not awesome)

I still feel I may be doing something wrong here with memory. If I turn XMP on for memory those scores go up, but then the CPU score goes to heck.

Passmark of 7285, now 99th percentile

Now you!

Why don't you go get .NET Core 2.1 and clone Orchard Core from https://github.com/OrchardCMS/OrchardCore and run this in PowerShell

measure-command { dotnet build }

and let me know in the comments how fast your PC is with both cold and warm builds!

GOTCHAS: Some of you are telling me you're getting warm builds of 4 seconds. I don't believe you! ;) Be sure to run it once without "measure-command" and make sure you're not benchmarking a failed build! My overclocked build now does 7.39 seconds warm.

NOTE: I  have an affiliate relationship with NewEgg and Amazon so if you use my links to buy something I'll make a small percentage and you're supporting this blog! Thanks!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

11 essential characteristics for being a good technical advocate or interviewer

Aug 8, 2018

Description:

I was talking to my friend Rob Caron today. He produces Azure Friday with me - it's our weekly video podcast on Azure and the Cloud. We were talking about the magic of a successful episode, but then realized the ingredients that Rob came up with were generic enough that they were essential for anyone who is teaching or advocating for a technology.

Personally I don't believe in "evangelism" in a technical context and I dislike the term "Technology Evangelism." Not only does it evoke unnecessary zealotry, but it also implies that your religion - er, technology - is not only what's best for someone, but that it's the only solution. Java people shouldn't try to convert PHP people. That's all nonsense, of course. I like the word "advocate" because you're (hopefully) advocating for the right solution regardless of technology.

Here are the 11 herbs and spices that are needed for a great technical talk, a good episode of a podcast or show, or a decent career talking and teaching about tech.

Empathy for the guest – When talking to another person, never let someone flounder and fail – compensate when necessary so they are successful.
Empathy for the audience – Stay conscious that you're delivering a talk/episode/post that people want to watch/read.
Improvisation – Learn how to think on your feet and keep the conversation going ("Yes, and…"). Consider ComedySportz or other mind exercises.
Listening – Don't just wait for your turn to speak, just to say something, and never interrupt to say it. Be present and conscious and respond to what you're hearing.
Speaking experience – Do the work. Hundreds of talks. Hundreds of interviews. Hundreds of shows. This ain't your first rodeo. Being good means hard work and putting in the hours, over years; whether it's 10 people in a lunch presentation or 2000 people in a keynote, you know what to articulate.
Technical experience – You have to know the technology. Strive to have context and personal experiences to reference. If you've never built/shipped/deployed something real (multiple times) you're just talking.
Be a customer – You use the product, every day, and more than just to demo stuff. Run real sites, ship real apps, multiple times. Maintain sites, have sites go down and wake up to fix them. Carry the proverbial pager.
Physical mannerisms – Avoid having odd personal tics and/or be conscious of your performance on video. I know what my tics are and I'm always trying to correct them. It's not self-critical, it's self-aware.
Personal brand – I'm not a fan of "personal branding" but here's how I think of it. Show up. (So important.) You're a known quantity in the community. You're reliable and kind. This lends credibility to your projects. Lend your voice and amplify others. Be yourself consistently and advocate for others, always.
Confidence – Don't be timid in what you have to say BUT be perfectly fine with saying something that the guest later corrects. You're NOT the smartest person in the room. It's OK just to be a person in the room.
Production awareness – Know how to ensure everything is set to produce a good presentation/blog/talk/video/sample (font size, mic, physical blocking, etc.). Always do tech checks. Always.

These are just a few tips but they've always served me well. We've done 450 episodes of Azure Friday and I've done nearly 650 episodes of the Hanselminutes Tech Podcast. Please Subscribe!

Related Links
VIDEO: How to get started with technical public speaking!
11 Top Tips for a Successful Technical Presentation
Technical Presentations: Be Prepared for Absolute Chaos
Tips for Preparing for a Technical Presentation

* pic from stevebustin used under CC.

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Developing locally with ASP.NET Core under HTTPS, SSL, and Self-Signed Certs

Aug 2, 2018

Description:

Last week on Twitter @getify started an excellent thread pointing out that we should be using HTTPS even on our local machines. Why?

You want your local web development set up to reflect your production reality as much as possible. URL parsing, routing, redirects, avoiding mixed-content warnings, etc. It's very easy to accidentally find oneself on http:// when everything in 2018 should be under https://.

I'm using ASP.NET Core 2.1 which makes local SSL super easy. After installing from http://dot.net I'll "dotnet new razor" in an empty folder to make a quick web app.

Then, when I "dotnet run" I see two URLs serving pages:

C:\Users\scott\Desktop\localsslweb> dotnet run
Hosting environment: Development
Content root path: C:\Users\scott\Desktop\localsslweb
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

One is HTTP over port 5000 and the other is HTTPS over 5001. However, if I hit https://localhost:5001, I may see an error:

Your connection to this site is not secure

That's because this is an untrusted SSL cert that was generated locally:

Untrusted cert

There's a dotnet global tool built into .NET Core 2.1 to help with certs at dev time, called "dev-certs."

C:\Users\scott> dotnet dev-certs https --help

Usage: dotnet dev-certs https [options]

Options:
-ep|--export-path Full path to the exported certificate
-p|--password Password to use when exporting the certificate with the private key into a pfx file
-c|--check Check for the existence of the certificate but do not perform any action
--clean Cleans all HTTPS development certificates from the machine.
-t|--trust Trust the certificate on the current platform
-v|--verbose Display more debug information.
-q|--quiet Display warnings and errors only.
-h|--help Show help information

I just need to run "dotnet dev-certs https --trust" and I'll get a pop up asking if I want to trust this localhost cert.

You want to trust this local cert?

On Windows it'll get added to the certificate store and on Mac it'll get added to the keychain. On Linux there isn't a standard way across distros to trust the certificate, so you'll need to perform the distro specific guidance for trusting the development certificate.

Close your browser and open up again at https://localhost:5001 and you'll see a trusted "Secure" badge in your browser.

Secure

Note also that by default HTTPS redirection is included in ASP.NET Core, and in Production it'll use HTTP Strict Transport Security (HSTS) as well, avoiding any initial insecure calls.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Error");
app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseCookiePolicy();

app.UseMvc();
}
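Optionally, both of those behaviors can be tuned in ConfigureServices. Here's a small illustrative sketch - the MaxAge and port values are just examples, not the framework defaults:

public void ConfigureServices(IServiceCollection services)
{
    // Controls the Strict-Transport-Security header that UseHsts() sends in non-Development environments
    services.AddHsts(options =>
    {
        options.MaxAge = TimeSpan.FromDays(60);
        options.IncludeSubDomains = true;
    });

    // Controls how UseHttpsRedirection() redirects, including the target port
    services.AddHttpsRedirection(options =>
    {
        options.RedirectStatusCode = StatusCodes.Status307TemporaryRedirect;
        options.HttpsPort = 5001;
    });

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}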

That's it. What's historically been a huge hassle for local development is essentially handled for you. Given that Chrome is marking http:// sites as "Not Secure" as of Chrome 68 you'll want to consider making ALL your sites Secure by Default. I wrote up how to get certs for free with Azure and Let's Encrypt.

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

One click deploy for MakeCode and the amazing AdaFruit Circuit Playground Express

Jul 31, 2018

Description:

Circuit Playground Express and Crickit
There's a ton of great open source hardware solutions out there. Often they're used to teach kids how to code, but they're also fun for adults learning about hardware!

Ultimately that "LED Moment" can be the start of a love affair with open source hardware and software! Arduino is great, as is Arduino talking to the Cloud! If that's too much or too many moving parts, you can always start small with other little robot kits for kids.

Recently my 10 year old and I have been playing with the Circuit Playground Express from Adafruit. We like it because it supports not just block-based programming and JavaScript via MakeCode (more on that in a moment) but you can also graduate to CircuitPython. Want to be even more advanced? You can also use the Arduino IDE to talk to the Circuit Playground Express. It's quickly becoming our favorite board. Be sure to get the board, some batteries and a holder, as well as a few alligator clips.

The Circuit Playground Express board is round and has alligator-clip pads around it so you don't have to solder to get started. It has a bunch of sensors for light, temperature, motion, and sound, as well as an IR receiver and transmitter and LEDs for visual output. There's a million things you can do with it. This summer Microsoft Research is doing a project a week you can do with the kids in your life with MakeCode!

I think the Circuit Playground Express is excellent by itself, but I like that I can stack it on top of the AdaFruit Crickit to make a VERY capable robotics platform. It's an ingenious design where three screws and metal standoffs connect the Crickit to the Circuit Playground and provide a bus for power and communication. The 10 year old wants to make a BattleBot now.

Sitting architecturally on top of all this great hardware is the open source Microsoft Make Code development environment. It's amazing and more people should be talking about it. MakeCode works with LEGO Mindstorms EV3, micro:bit, Circuit Playground Express, Minecraft, Cue robots, Chibichips, and more. The pair of devices is truly awesome.

Frankly I'm blown away at how easy it is and how easily my kids were productive. The hardest part of the whole thing was the last step where they need to copy the compiled code to the Circuit Playground Express. The editor is all online at GitHub https://github.com/Microsoft/pxt and you can run it locally if you like but there's no reason to unless you're developing new packages.

We went to https://makecode.adafruit.com/ for the Circuit Playground Express. We made a new project (and optionally added the Crickit board blocks as an extension) and then got to work. The 10 year old followed a tutorial and made a moisture sensor that uses an alligator clip and a nail to check if our plants need to be watered! (If they do, it beeps and turns red!)

You can see the code here as blocks...

The MakeCode IDE

or see the same code as JavaScript!

let Soil_reading = 0
let dry_value = 0
dry_value = 1500
light.setBrightness(45)
light.setAll(0x00ff00)
forever(function () {
    Soil_reading = input.pinA1.value()
    console.logValue("Soil reading", Soil_reading)
    if (Soil_reading < dry_value) {
        light.setAll(0xff0000)
        music.playTone(262, music.beat(BeatFraction.Half))
    } else {
        light.clear()
    }
    pause(__internal.__timePicker(2000))
})

When you've written your code, you just click DOWNLOAD and you'll get a "uf2" file.

Downloading compiled MakeCode UF2 files

Then the hardest part, you plug in the Circuit Playground Express via USB, it shows up as a Drive called "CPLAYBOOT," and you copy that file over. It's easy for techies, but a speed bump for kids.

Downloading compiled MakeCode UF2 files

It's really a genius process where they have removed nearly every obstacle in the hardware. After the file gets copied over (literally after the last byte is written) the device resets and starts running it.

The "Developer's Inner Loop" is as short as possible, so kudos to the team. Code, download, deploy, run/test, repeat.

This loop is fast and clever, but I wanted to speed it up a little so I wrote this little utility to automatically copy your MakeCode file to the Circuit Playground Express. Basically the idea is:

Associate my app with *.uf2 files
When launched, look for a local drive labeled CPLAYBOOT and copy the uf2 file over to it.

That's it. It speeds up the experience and saves me a number of clicks. Sure there's batch file/powershell/script ways to do it but this wasn't hard.

static void Main(string[] args)
{
    var sourceFile = args[0];
    var drive = (from d in DriveInfo.GetDrives()
                 where d.VolumeLabel == "CPLAYBOOT"
                 select d.RootDirectory).FirstOrDefault();

    if (drive == null)
    {
        Console.WriteLine("Press RESET on your Circuit Playground Express and try again!");
        Environment.Exit(1);
    }

    Console.WriteLine($"Found Circuit Playground Express at {drive.FullName}");
    File.Copy(sourceFile, Path.Combine(drive.FullName, Path.GetFileName(sourceFile)));
}

Then I double click on the uf2 file and get this dialog and SCROLL DOWN and click "Look for another app on this PC." (They are making this hard because they want you to use a Store App, which I haven't made yet)

Selecting a custom app for UF2 files

Now I can just click my uf2 files in Windows Explorer and they'll automatically get deployed to my Circuit Playground Express!

Found Circuit Playground Express at D:\

You can find source here https://github.com/shanselman/MakeCodeLaunchAndCopy if you're a developer, just get .NET Core 2.1 and run my .cmd file on Windows to build it yourself. Feel free to make it a Windows Store App if you're an overachiever. Pull Requests appreciated ;)

Otherwise, grab the little release here https://github.com/shanselman/MakeCodeLaunchAndCopy/releases and unzip the contents into their own folder on Windows, then double-click a UF2 file, point Windows to the MakeCodeLaunchAndCopy.exe file, and you're all set!

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

SQL Server on Linux or in Docker plus cross-platform SQL Operations Studio

Jul 27, 2018

Description:

I recently met some folks that didn't know that SQL Server 2017 also runs on Linux, but they really needed to know. They had a single Windows desktop and a single Windows Server that they were keeping around just to run SQL Server. They had long been a Linux shop and were now fully containerized...except for this machine under Anna's desk. (I assume The Cloud is next...pro tip: don't keep important servers under your desk.) You can even get a license first and decide on the platform later.

You can run SQL Server on a few Linux flavors...

Install on Red Hat Enterprise Linux
Install on SUSE Linux Enterprise Server
Install on Ubuntu

or, even better, run it on Docker...

Run on Docker

Of course you'll want to do the appropriate volume mapping to keep your database on durable storage. I'm digging being able to spin up a full SQL Server inside a container on my Windows machine with no install.

I've got Docker for Windows on my laptop and I'm using Shayne Boyer's "Docker Why" repo to make the point. Look at his sample DockerCompose that includes both a web frontend and a backend using SQL Server on Linux.

version: '3.0'
services:

  mssql:
    image: microsoft/mssql-server-linux:latest
    container_name: db
    ports:
      - 1433:1433
    volumes:
      - /var/opt/mssql
      # we copy our scripts onto the container
      - ./sql:/usr/src/app
    # bash will be executed from that path, our scripts folder
    working_dir: /usr/src/app
    # run the entrypoint.sh that will import the data AND sqlserver
    command: sh -c ' chmod +x ./start.sh; ./start.sh & /opt/mssql/bin/sqlservr;'
    environment:
      ACCEPT_EULA: 'Y'
      SA_PASSWORD: P@$$w0rdP@$$w0rd

Note his starting command where he's doing an initial population of the database with sample data, then running sqlservr itself. The SQL Server on Linux Docker container includes the "sqlcmd" command line so you can set up the database, maintain it, etc with the same command line you've used on Windows. You can also configure SQL Server from Environment Variables so it makes it easy to use within Docker/Kubernetes. It'll take just a few minutes to get going.

Example:

/opt/mssql-tools/bin/sqlcmd -S localhost -d Names -U SA -P $SA_PASSWORD -I -Q "ALTER TABLE Names ADD ID UniqueIdentifier DEFAULT newid() NOT NULL;"

I cloned his repo (and I have .NET Core 2.1) and did a "docker-compose up" and boom, running a front end under Alpine and backend with SQL Server on Linux.

101→ C:\Users\scott> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e5b4dae93f6d namesweb "dotnet namesweb.dll" 38 minutes ago Up 38 minutes 0.0.0.0:57270->80/tcp, 0.0.0.0:44348->443/tcp src_namesweb_1
5ddffb76f9f9 microsoft/mssql-server-linux:latest "sh -c ' chmod +x ./…" 41 minutes ago Up 39 minutes 0.0.0.0:1433->1433/tcp mssql
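From application code, the containerized SQL Server is just another SQL Server. Here's a minimal C# sketch using System.Data.SqlClient with the port mapping and SA password from the compose file above (the query is only a sanity check, and of course don't connect as SA like this in real life):

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Port 1433 is mapped to the host by the compose file; the password
        // matches the SA_PASSWORD environment variable set on the container
        var connectionString = "Server=localhost,1433;User Id=sa;Password=P@$$w0rdP@$$w0rd;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}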

Command lines are nice, but SQL Server is known for SQL Server Management Studio, a nice GUI for Windows. Did they release SQL Server on Linux and then expect everyone to use Windows to manage it? I say nay nay! Check out the cross-platform and open source SQL Operations Studio, "a data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux." You can download SQL Operations Studio free here.

SQL Ops Studio is really impressive. Here I am querying SQL Server on Linux running within my Docker container on my Windows laptop.

SQL Ops Studio - Cross platform SQL management

As I'm digging in and learning how far cross-platform SQL Server has come, I also checked out the mssql extension for Visual Studio Code that lets you develop and execute SQL against any SQL Server. The VS Code SQL Server Extension is also open source!

Go check out SQL Server in Docker at https://github.com/Microsoft/mssql-docker and try Shayne's sample at https://github.com/spboyer/docker-why

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

Example Code - Opinionated ContosoUniversity on ASP.NET Core 2.0's Razor Pages

Jul 25, 2018

Description:

The best way to learn about code isn't just writing more code - it's reading code! Not all of it will be great code and much of it won't be the way you would do it, but it's a great way to expand your horizons.

In fact, I'd argue that most people aren't reading enough code. Perhaps there aren't enough clean code bases to check out and learn from.

I was pleased to stumble on this code base from Jimmy Bogard called Contoso University at https://github.com/jbogard/ContosoUniversityDotNetCore-Pages.

There's a LOT of good stuff to read in this repo so I won't claim to have read it all or as deeply as I could. In fact, there's a good solid day of reading and absorbing here. However, here are some of the things I noticed and that I appreciate. Some of this is very "Jimmy" code, since it was written for and by Jimmy. This is a good thing and not a dig. We all collect patterns and make libraries and develop our own spins on architectural styles. I love that Jimmy collects a bunch of things he's created or contributed to over the years and puts them into a nice clear sample for us to read. As Jimmy points out, there's a lot in https://github.com/jbogard/ContosoUniversityDotNetCore-Pages to explore:

CQRS pattern and MediatR
AutoMapper for automatically mapping "left hand/right hand" objects
Vertical slice architecture
Razor Pages
Fluent Validation and Shouldly
HtmlTags object model for generating HTML
Entity Framework Core

Clone and Build just works

A low bar, right? You'd be surprised how often I git clone someone's repository and they haven't tested it elsewhere. Bonus points for a build.ps1 that bootstraps whatever needs to be done. I had .NET Core 2.x on my system already and this build.ps1 got the packages I needed and built the code cleanly.

It's an opinionated project with some opinions. ;) And that's great, because it means I'll learn about techniques and tools that I may not have used before. If someone uses a tool that's not the "default" it may mean that the defaults are lacking!

Build.ps1 is using a build script style taken from PSake, a PowerShell build automation tool.
It's building to a folder called ./artifacts as a convention.
Inside build.ps1, it's using Roundhouse, a Database Migration Utility for .NET using sql files and versioning based on source control http://projectroundhouse.org
It's set up for Continuous Integration in AppVeyor, a lovely CI/CD system I use myself.
It uses the Octo.exe tool from OctopusDeploy to package up the artifacts.

Organized and Easy to Read

I'm finding the code easy to read for the most part. I started at Startup.cs to just get a sense of what middleware is being brought in.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMiniProfiler().AddEntityFramework();

    services.AddDbContext<SchoolContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    services.AddAutoMapper(typeof(Startup));
    services.AddMediatR(typeof(Startup));
    services.AddHtmlTags(new TagConventions());

    services.AddMvc(opt =>
        {
            opt.Filters.Add(typeof(DbContextTransactionPageFilter));
            opt.Filters.Add(typeof(ValidatorPageFilter));
            opt.ModelBinderProviders.Insert(0, new EntityModelBinderProvider());
        })
        .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
        .AddFluentValidation(cfg => { cfg.RegisterValidatorsFromAssemblyContaining<Startup>(); });
}

Here I can see what libraries and helpers are being brought in, like AutoMapper, MediatR, and HtmlTags. Then I can go follow up and learn about each one.

MiniProfiler

I've always loved MiniProfiler. It's a hidden gem of .NET and it's been around being awesome forever. I blogged about it back in 2011! It sits in the corner of your web page and gives you REAL actionable details on how your site behaves and what the important perf timings are.

MiniProfiler is the profiler you didn't know you needed

It's even better with EF Core in that it'll show you the generated SQL as well! Again, all inline in your web site as you develop it.

inline SQL in MiniProfiler

Very nice.
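For context, the ConfigureServices call above is only half of the wiring; MiniProfiler also needs its middleware registered in Configure. A rough sketch, assuming the MiniProfiler.AspNetCore.Mvc package (not necessarily exactly what this repo does):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Register MiniProfiler early so it can time the whole request pipeline
    app.UseMiniProfiler();

    app.UseStaticFiles();
    app.UseMvc();
}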

Clean Unit Tests

Jimmy is using XUnit and has an IntegrationTestBase here with some stuff I don't understand, like SliceFixture. I'm marking this as something I need to read up on and research. I can't tell if this is the start of a new testing helper library, as it feels too generic and important to be in this sample.

He's using the CQRS "Command Query Responsibility Segregation" pattern. Here he starts with a Create command, sends it, then does a Query to confirm the results. It's very clean and he's got a very isolated test.

[Fact]
public async Task Should_get_edit_details()
{
    var cmd = new Create.Command
    {
        FirstMidName = "Joe",
        LastName = "Schmoe",
        EnrollmentDate = DateTime.Today
    };

    var studentId = await SendAsync(cmd);

    var query = new Edit.Query { Id = studentId };
    var result = await SendAsync(query);

    result.FirstMidName.ShouldBe(cmd.FirstMidName);
    result.LastName.ShouldBe(cmd.LastName);
    result.EnrollmentDate.ShouldBe(cmd.EnrollmentDate);
}

FluentValidator

https://fluentvalidation.net is a helper library for creating clear strongly-typed validation rules. Jimmy uses it throughout and it makes for very clean validation code.

public class Validator : AbstractValidator<Command>
{
    public Validator()
    {
        RuleFor(m => m.Name).NotNull().Length(3, 50);
        RuleFor(m => m.Budget).NotNull();
        RuleFor(m => m.StartDate).NotNull();
        RuleFor(m => m.Administrator).NotNull();
    }
}

Useful Extensions

Looking at a project's C# extension methods is a great way to determine what the author feels are gaps in the underlying included functionality. These are useful for returning JSON from Razor Pages!

public static class PageModelExtensions
{
    public static ActionResult RedirectToPageJson<TPage>(this TPage controller, string pageName)
        where TPage : PageModel
    {
        return controller.JsonNet(new
        {
            redirect = controller.Url.Page(pageName)
        });
    }

    public static ContentResult JsonNet(this PageModel controller, object model)
    {
        var serialized = JsonConvert.SerializeObject(model, new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        });

        return new ContentResult
        {
            Content = serialized,
            ContentType = "application/json"
        };
    }
}

PaginatedList

PaginatedList

I've always wondered what to do with helper classes like PaginatedList. Too small for a package, too specific to be built-in? What do you think?

public class PaginatedList<T> : List<T>
{
public int PageIndex { get; private set; }
public int TotalPages { get; private set; }

public PaginatedList(List<T> items, int count, int pageIndex, int pageSize)
{
PageIndex = pageIndex;
TotalPages = (int)Math.Ceiling(count / (double)pageSize);

this.AddRange(items);
}

public bool HasPreviousPage
{
get { return (PageIndex > 1); }
}

public bool HasNextPage
{
get { return (PageIndex < TotalPages); }
}

public static async Task<PaginatedList<T>> CreateAsync(IQueryable<T> source, int pageIndex, int pageSize)
{
var count = await source.CountAsync();
var items = await source.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToListAsync();
return new PaginatedList<T>(items, count, pageIndex, pageSize);
}
}
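For what it's worth, here's a minimal sketch of how a Razor Page might consume it - I'm assuming the sample's SchoolContext and Student types and picking a page size of 10 for illustration:

public class IndexModel : PageModel
{
private readonly SchoolContext _db;

public IndexModel(SchoolContext db) => _db = db;

public PaginatedList<Student> Students { get; private set; }

public async Task OnGetAsync(int pageIndex = 1)
{
// CountAsync/Skip/Take all run in the database, so only one page is ever materialized
Students = await PaginatedList<Student>.CreateAsync(
_db.Students.OrderBy(s => s.LastName), pageIndex, 10);
}
}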

I'm still reading all the source I can. Absorbing what resonates with me, considering what I don't know or understand and creating a queue of topics to read about. I'd encourage you to do the same! Thanks Jimmy for writing this large sample and for giving us some code to read and learn from!

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

AltCover and ReportGenerator give amazing code coverage on .NET Core

Jul 20, 2018

Description:

I'm continuing to explore testing and code coverage on open source .NET Core. Earlier this week I checked out coverlet. There is also the venerable OpenCover and there's some cool work being done to get OpenCover working with .NET Core, but it's Windows only.

Today, I'm exploring AltCover by Steve Gilham. While other coverage tools hook the .NET Profiling API at run-time, AltCover instead weaves extra IL into your assemblies to record coverage.

As the name suggests, it's an alternative coverage approach. Rather than working by hooking the .net profiling API at run-time, it works by weaving the same sort of extra IL into the assemblies of interest ahead of execution. This means that it should work pretty much everywhere, whatever your platform, so long as the executing process has write access to the results file. You can even mix-and-match between platforms used to instrument and those under test.

AltCover is a NuGet package but it's also available as .NET Core Global Tool which is awesome.

dotnet tool install --global altcover.global

This makes "altcover" a command that's available everywhere without adding it to my project.

That said, I'm going to follow the AltCover Quick Start and see how quickly I can get it set up!

I'll install it into my test project, hanselminutes.core.tests:

dotnet add package AltCover

and then run

dotnet test /p:AltCover=true

90.1% Line Coverage, 71.4% Branch Coverage

Cool. My tests run as usual, but now I've got a coverage.xml in my test folder. I could also generate LCov or Cobertura reports if I'd like. At this point my coverage.xml is nearly half a meg! That's a lot of good information, but how do I see the results in a human-readable format?

This is the OpenCover XML format and I can run ReportGenerator on the coverage file and get a whole bunch of HTML files. Basically an entire coverage mini website!

I downloaded ReportGenerator and put it in its own folder (this would be ideal as a .NET Core global tool).

c:\ReportGenerator\ReportGenerator.exe -reports:coverage.xml -targetdir:./coverage

Make sure you use a decent targetDir; otherwise you might end up with dozens of HTML files littered in your project folder. You might also consider .gitignoring the resulting folder and coverage file. Open up index.htm and check out all this great information!

Coverage Report says 90.1% Line Coverage

Note the Risk Hotspots at the top there! I've got a CustomPageHandler with a significant NPath Complexity and two Views with a significant Cyclomatic Complexity.

Also check out the excellent branch coverage as expressed here in the results of the coverage report. You can see that EnableAutoLinks was always true, so I only ever tested one branch. I might want to add a negative test here and explore whether there are any side effects when EnableAutoLinks is false.

Branch Coverage

Be sure to explore AltCover and its Full Usage Guide. There are a number of ways to run it: global tools, dotnet test, MSBuild Tasks, and PowerShell integration!

For use cases, see Use Cases. For modes of operation, see Modes of Operation. For driving AltCover from dotnet test, see dotnet test integration. For driving AltCover from MSBuild, see MSBuild Tasks. For driving AltCover and associated tools with Windows PowerShell or PowerShell Core, see PowerShell integration.

There's a lot of great work here and it took me literally 10 minutes to get a great coverage report with AltCover and .NET Core. Kudos to Steve on AltCover! Head over to https://github.com/SteveGilham/altcover and give it a STAR, file issues (be kind) or perhaps offer to help out! And most of all, share cool Open Source projects like this with your friends and colleagues.

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.
© 2018 Scott Hanselman. All rights reserved.
     

.NET Core Code Coverage as a Global Tool with coverlet

Jul 18, 2018

Description:

Last week I blogged about "dotnet outdated," an essential .NET Core "global tool" that helps you find out what NuGet package reference you need to update.

.NET Core Global Tools are really taking off right now. They are meant for devs - this isn't a replacement for chocolatey or apt-get - this is more like npm's global developer tools. They're putting together a better way to find and identify global tools, but for now Nate McMaster has a list of some great .NET Core Global Tools on his GitHub. Feel free to add to that list!

.NET tools can be installed like this:

dotnet tool install -g <package id>

So for example:

C:\Users\scott> dotnet tool install -g dotnetsay
You can invoke the tool using the following command: dotnetsay
Tool 'dotnetsay' (version '2.1.4') was successfully installed.
C:\Users\scott> dotnetsay

Welcome to using a .NET Core global tool!

You know, all the important tools. Seriously, some are super fun. ;)

Coverlet is a cross platform code coverage tool that's in active development. In fact, I automated my build with code coverage for my podcast site back in March. I combined VS Code, Coverlet, xUnit, plus these Visual Studio Code extensions

Coverage Gutters - Reads in the lcov.info file (name matters) and highlights lines with color
.NET Core Test Explorer - Discovers tests and gives you a nice explorer.

for a pretty nice experience! All free and open source.

I had to write a little PowerShell script because the "dotnet test" command for testing my podcast site with coverlet got somewhat unruly. Coverlet.msbuild was added as a package reference for my project.

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests

I heard last week that coverlet had initial support for being a .NET Core Global Tool, which I think would be convenient since I could use it anywhere on any project without added references.

dotnet tool install --global coverlet.console

At this point I can type "Coverlet" and it's available anywhere.

I'm told this is an initial build as a ".NET Global Tool" so there's always room for constructive feedback.

From what I can tell, I run it like this:

coverlet .\bin\Debug\netcoreapp2.1\hanselminutes.core.tests.dll --target "dotnet" --targetargs "test --no-build"

Note I have to pass in the already-built test assembly since coverlet instruments that binary and I need to say "--no-build" since we don't want to accidentally rebuild the assemblies and lose the instrumentation.

Coverlet can generate lots of coverage formats like opencover or lcov, and by default gives a nice ASCII table:

88.1% Line Coverage in Hanselminutes.core

I think my initial feedback (I'm not sure if this is possible) is smarter defaults. I'd like to "fall into the Pit of Success." That means even if I mess up and don't read the docs, I still end up successful.

For example, if I type "coverlet test" while the current directory is a test project, it'd be nice if that implied all this as these are reasonable defaults.

.\bin\Debug\whatever\whatever.dll --target "dotnet" --targetargs "test --no-build"

It's great that there is this much control, but I think assuming "dotnet test" is a fair assumption, so ideally I could go into any folder with a test project and type "coverlet test" and get that nice ASCII table. Later I'd be more sophisticated and read the excellent docs as there's lots of great options like setting coverage thresholds and the like.

I think the future is bright with .NET Global Tools. This is just one example! What's your favorite?

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

Lynx is dead - Long live Browsh for text-based internet browsing

Jul 13, 2018

Description:

The standard for browsing the web over a text-based terminal is Lynx, right? It's the legendary text web browser that you can read about at https://lynx.invisible-island.net/ or, even better, run right now with

docker run --rm -it nbrown/lynx lynx http://hanselman.com/

Awesome, right? But it's text. Lynx renders alt-text rather than images, and doesn't really take advantage of modern browser capabilities OR modern terminal capabilities.

Enter Browsh! https://www.brow.sh/

Browsh is a fully-modern text-based browser. It renders anything that a modern browser can; HTML5, CSS3, JS, video and even WebGL. Its main purpose is to be run on a remote server and accessed via SSH/Mosh

Imagine running your browser on a remote machine connected to full power while ssh'ing into your hosted browsh instance. I don't know about you, but my laptop is currently using 2 gigs of RAM for Chrome and it's basically just all fans. I might be able to get 12 hours of battery life if I hung out in tmux and used browsh! Not to mention the bandwidth savings. If I'm tethered or overseas on a 3G network, I can still get a great browsing experience and just barely sip data.

Browsing my blog with Browsh

You can even open new tabs! Check out the keybindings! You gotta try it. Works great on Windows 10 with the new console. Just run this one Docker command:

docker run -it --rm browsh/browsh

If you think this idea is silly, that's OK. I think it's brilliant and creative and exactly the kind of clever idea the internet needs. This solves an interesting problem in an interesting way...in fact it returns us back to the "dumb terminal" days, doesn't it?

There was a time when my low-power machine waited for text from a refrigerator-sized machine. The fridge did the work and my terminal did the least.

Today my high-powered machine waits for text from another high-powered machine and then struggles to composite it all as 7 megs of JavaScript downloads from TheVerge.com. But I'm not bitter. ;)

Check out my podcast site on Browsh. Love it.

Tiny pixelated heads made with ASCII

If you agree that Browsh is amazing and special, consider donating! It's currently maintained by just one person and they just want $1000 a month on their Patreon to work on Browsh all the time! Go tell Tom on Twitter that you think it's special, then give him some coins. What an exciting and artful project! I hope it continues!

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

NuKeeper for automated NuGet Package Reference Updates on Build Servers

Jul 10, 2018

Description:

Last week I looked at "dotnet outdated," a super useful .NET Core Global Tool for keeping a project's NuGet packages up to date. Since then I've discovered there's a whole BUNCH of great projects solving different aspects of the "minor version problem." I love this answer to "Why?" from the NuKeeper (inspired by Greenkeeper) project, with emphasis mine. NuKeeper will check for updates AND try to update your references for you! Why not automate the tedium!

NuGet package updates are a form of change that should be deployed, and we likewise want to change the cycle from "NuGet package updates are infrequent and contain lots of package changes, therefore NuGet package updates are hard and dangerous..." to "NuGet package updates are frequent and contain small changes, therefore NuGet package updates are easy and routine...".

Certainly no one is advocating updating the major versions of your dependent NuGet packages, but small compatible bug fixes come often and are often missed. Including a tool to discover - and optionally apply - these changes in a CI/CD (Continuous Integration/Continuous Deployment) pipeline can be a great timesaver.

Why do we deploy code changes frequently but seldom update NuGet packages?

Good question!

NuKeeper

NuKeeper is a .NET Core Global Tool as well, and you can install it with:

dotnet tool install --global NuKeeper

Here it is running against my regularly updated podcast website, which runs ASP.NET Core 2.1:

NuKeeper says I have 3 packages to update

Looks like three of my packages are out of date. NuKeeper shows what version I have and what I could update to, as well as how long an update has been available.

You can also restrict your updates by policy, so "age=3w" for packages over 3 weeks old (so you don't get overly fresh updates) or "change=minor" or "change=patch" if you trust your upstream packages to not break things in patch releases, etc.

NuKeeper is picking up steam and while (as of the time of this writing) its command line parameter style is a little unconventional, Anthony Steele and the team are very open to feedback, with many improvements already in progress as this project matures!

The update functionality is somewhat experimental and currently does 1 update per local run, but I'm really enjoying the direction NuKeeper is going!

Automatic NuGet Updates via Pull Request

NuKeeper has a somewhat unique and clever feature called Repository Mode in that it can automatically issue a Pull Request against your repository with the changes needed to update your NuGet references. Check out this example PullRequest!

Anthony - the lead developer - points out that ideally you'd set up NuKeeper to send PRs for you. Automatic PRs are NuKeeper's primary goal and use case!

The NuKeeperBot has automatically issued a PR with a list of packages to update

Again, it's early days, but between NuKeeper and "dotnet outdated," I'm feeling more in control of my package references than ever before! What are YOU using?

Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!


© 2018 Scott Hanselman. All rights reserved.
     

Using Flurl to easily build URLs and make testable HttpClient calls in .NET

Jun 23, 2018

Description:

Flurl

I posted about using Refit along with ASP.NET Core 2.1's HttpClientFactory earlier this week. Several times when exploring this space (on Twitter, googling around, and in my own blog comments) I've come upon Flurl, as in "Fluent URL."

Not only is that a killer name for an open source project, Flurl is very active, very complete, and very interesting. By the way, take a look at the https://flurl.io/ site for a great example of a good home page for a well-run open source library. Clear, crisp, unambiguous, with links on how to Get It, Learn It, and Contribute. Not to mention extensive docs. Kudos!

Flurl is a modern, fluent, asynchronous, testable, portable, buzzword-laden URL builder and HTTP client library for .NET.

You had me at buzzword-laden! Flurl embraces the .NET Standard and works on .NET Framework, .NET Core, Xamarin, and UWP - so, everywhere.

To use just the URL builder, install Flurl. For the kitchen sink (recommended), install Flurl.Http. In fact, Todd Menier was kind enough to share what a Flurl implementation of my SimpleCastClient would look like! Just to refresh you, my podcast site uses the SimpleCast podcast hosting API as its back-end.

My super basic typed implementation that "has a" HttpClient looks like this. To be clear this sample is WITHOUT FLURL.

public class SimpleCastClient
{
private HttpClient _client;
private ILogger<SimpleCastClient> _logger;
private readonly string _apiKey;

public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
{
_client = client;
_client.BaseAddress = new Uri($"https://api.simplecast.com"); //Could also be set in Startup.cs
_logger = logger;
_apiKey = config["SimpleCastAPIKey"];
}

public async Task<List<Show>> GetShows()
{
try
{
var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
_logger.LogWarning($"HttpClient: Loading {episodesUrl}");
var res = await _client.GetAsync(episodesUrl);
res.EnsureSuccessStatusCode();
return await res.Content.ReadAsAsync<List<Show>>();
}
catch (HttpRequestException ex)
{
_logger.LogError($"An error occurred connecting to SimpleCast API {ex.ToString()}");
throw;
}
}
}
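
As an aside, a typed client like this usually gets registered with the HttpClientFactory in Startup.cs - here's a minimal sketch of that registration, assuming the setup from my earlier HttpClientFactory post rather than anything in Todd's Flurl sample:

public void ConfigureServices(IServiceCollection services)
{
// Register the typed client so the factory hands it a managed HttpClient instance
services.AddHttpClient<SimpleCastClient>(client =>
{
client.BaseAddress = new Uri("https://api.simplecast.com");
});
}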

Let's explore Todd's expression of the same client using the Flurl library!

Note we set up a client in Startup.cs, use the same configuration, and also put in some nice aspect-oriented events for logging the befores and afters. This is VERY nice and you'll note it pulls my cluttered logging code right out of the client!

// Do this in Startup. All calls to SimpleCast will use the same HttpClient instance.
FlurlHttp.ConfigureClient(Configuration["SimpleCastServiceUri"], cli => cli
.Configure(settings =>
{
// keeps logging & error handling out of SimpleCastClient
settings.BeforeCall = call => logger.LogWarning($"Calling {call.Request.RequestUri}");
settings.OnError = call => logger.LogError($"Call to SimpleCast failed: {call.Exception}");
})
// adds default headers to send with every call
.WithHeaders(new
{
Accept = "application/json",
User_Agent = "MyCustomUserAgent" // Flurl will convert that underscore to a hyphen
}));

Again, this setup code lives in Startup.cs and is a one-time thing. The headers and User Agent are all dealt with once there, in a chained "fluent" manner.

Here's the new SimpleCastClient with Flurl.

using Flurl;
using Flurl.Http;

public class SimpleCastClient
{
// look ma, no client!
private readonly string _baseUrl;
private readonly string _apiKey;

public SimpleCastClient(IConfiguration config)
{
_baseUrl = config["SimpleCastServiceUri"];
_apiKey = config["SimpleCastAPIKey"];
}

public Task<List<Show>> GetShows()
{
return _baseUrl
.AppendPathSegment("v1/podcasts/shownum/episodes.json")
.SetQueryParam("api_key", _apiKey)
.GetJsonAsync<List<Show>>();
}
}

See in GetShows() how we're also using the Url Builder fluent extensions in the Flurl library. See that _baseUrl is actually a string? We all know that we're supposed to use System.Uri but it's such a hassle. Flurl adds extension methods to strings so that you can seamlessly transition from the string representations of URLs/URIs (that we all use) to build up a query string and, in this case, a GET that returns JSON.

Very clean!

Flurl also prides itself on making HttpClient testing easier. Here's a more sophisticated example from their site:

// Flurl will use 1 HttpClient instance per host
var person = await "https://api.com"
.AppendPathSegment("person")
.SetQueryParams(new { a = 1, b = 2 })
.WithOAuthBearerToken("my_oauth_token")
.PostJsonAsync(new
{
first_name = "Claire",
last_name = "Underwood"
})
.ReceiveJson<Person>();

This example is doing a post with an anonymous object that will automatically turn into JSON when it hits the wire. It also receives JSON as the response. Even the query params are created with a C# POCO (Plain Old CLR Object) and turned into name=value strings automatically.

Here's a test Flurl-style!

// fake & record all http calls in the test subject
using (var httpTest = new HttpTest()) {
// arrange
httpTest.RespondWith(200, "OK");
// act
await sut.CreatePersonAsync();
// assert
httpTest.ShouldHaveCalled("https://api.com/*")
.WithVerb(HttpMethod.Post)
.WithContentType("application/json");
}

Flurl.Http includes a set of features to easily fake and record HTTP activity. You can make a whole series of assertions about your APIs:

httpTest.ShouldHaveCalled("http://some-api.com/*")
.WithVerb(HttpMethod.Post)
.WithContentType("application/json")
.WithRequestBody("{\"a\":*,\"b\":*}") // supports wildcards
.Times(1)

All in all, it's an impressive set of tools that I hope you explore and consider for your toolbox! There's a ton of great open source like this with .NET Core and I'm thrilled to do a small part to spread the word. You should too!

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

Using ASP.NET Core 2.1's HttpClientFactory with Refit's REST library

Jun 20, 2018

Description:

Strong by Lucyb_22 used under Creative Commons from Flickr

When I moved my podcast site over to ASP.NET Core 2.1 I also started using HttpClientFactory and wrote up my experience. It's a nice clean way to centralize both settings and policy for your HttpClients, especially if you're using a lot of them to talk to a lot of small services.

Last year I explored Refit, an automatic type-safe REST library for .NET Standard. It makes it super easy to just declare the shape of a client and its associated REST API with a C# interface:

public interface IGitHubApi
{
[Get("/users/{user}")]
Task<User> GetUser(string user);
}

and then ask for an HttpClient that speaks that API's shape, then call it. Fabulous.

var gitHubApi = RestService.For<IGitHubApi>("https://api.github.com");

var octocat = await gitHubApi.GetUser("octocat");

But! What does Refit look like and how does it work in an HttpClientFactory-enabled world? Refit has recently been updated with first class support for ASP.NET Core 2.1's HttpClientFactory with the Refit.HttpClientFactory package.

Since you'll want to centralize all your HttpClient configuration in your ConfigureServices method in Startup, Refit adds a nice extension method hanging off of Services.

You add a RefitClient of a type, then add whatever other IHttpClientBuilder methods you want afterwards:

services.AddRefitClient<IWebApi>()
.ConfigureHttpClient(c => c.BaseAddress = new Uri("https://api.example.com"));
// Add additional IHttpClientBuilder chained methods as required here:
// .AddHttpMessageHandler<MyHandler>()
// .SetHandlerLifetime(TimeSpan.FromMinutes(2));

Of course, then you can just have your HttpClient automatically created and passed into the constructor. You'll see in this sample from their GitHub that you get an IWebApi (that is, whatever type you want, like my IGitHubApi) and just go to town with a strongly typed interface over an HttpClient, with autocomplete.

public class HomeController : Controller
{
public HomeController(IWebApi webApi)
{
_webApi = webApi;
}

private readonly IWebApi _webApi;

public async Task<IActionResult> Index(CancellationToken cancellationToken)
{
var thing = await _webApi.GetSomethingWeNeed(cancellationToken);

return View(thing);
}
}

Refit is easy to use, and even better with ASP.NET Core 2.1. Go get Refit and try it today!

* Strong image by Lucyb_22 used under Creative Commons from Flickr

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

Penny Pinching in the Cloud: Deploying Containers cheaply to Azure

Jun 15, 2018

Description:

I saw a tweet from a person on Twitter who wanted to know the easiest and cheapest way to get a Web Application that's in a Docker Container up to Azure. There are a few ways and it depends on your use case.

Some apps aren't web apps at all, of course, and just start up in a stateless container, do some work, then exit. For a container like that, you'll want to use Azure Container Instances. I did a show and demo on this for Azure Friday.

Azure Container Instances

Using the latest Azure CLI  (command line interface - it works on any platform), I just do these commands to start up a container quickly. Billing is per-second. Shut it down and you stop paying. Get in, get out.

Tip: If you don't want to install anything, just go to https://shell.azure.com to get a bash shell and you can do these commands there, even on a Chromebook.

I'll make a "resource group" (just a label to hold stuff, so I can delete it en masse later). Then "az container create" with the image. Note that that's a public image from Docker Hub, but I can also use a private Container Registry or a private one in Azure. More on that in a second.

Anyway, make a group (or use an existing one), create a container, and then either hit the IP I get back or I can query for (or guess) the full name. It's usually dns-name-label.location.azurecontainer.io.

> az group create --name someContainers --location westus
Location Name
---------- --------------
westus someContainers
> az container create --resource-group someContainers --name fancypantscontainer --image microsoft/aci-helloworld --dns-name-label fancy-container-demo --ports 80
Name ResourceGroup ProvisioningState Image IP:ports CPU/Memory OsType Location
------------------- --------------- ------------------- ------------------------ ---------------- --------------- -------- ----------
fancypantscontainer someContainers Pending microsoft/aci-helloworld 40.112.167.31:80 1.0 core/1.5 gb Linux westus
> az container show --resource-group someContainers --name fancypantscontainer --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
FQDN ProvisioningState
--------------------------------------------- -------------------
fancy-container-demo.westus.azurecontainer.io Succeeded

Boom, container in the cloud, visible externally (if I want) and per-second billing. Since I made and named a resource group, I can delete everything in that group (and stop billing) easily:

> az group delete -g someContainers

This is cool because I can basically run Linux or Windows Containers in a "serverless" way. Meaning I don't have to think about VMs and I can get automatic, elastic scale if I like.

Azure Web Apps for Containers

ACI is great for lots of containers quickly, for bringing containers up and down, but I like my long-running web apps in Azure Web Apps for Containers. I run 19 Azure Web Apps today via things like Git/GitHub Deploy, publish from VS, or CI/CD from VSTS.

Azure Web Apps for Containers is the same idea, except I'm deploying containers directly. I can do a Single Container easily or use Docker Compose for multiple.

I wanted to show how easy it was to set this up so I did a video (cold, one take, no rehearsal, real accounts, real app) and put it on YouTube. It explains "How to Deploy Containers cheaply to Azure" in 21 minutes. It could have been shorter, but I also wanted to show how you can deploy from both Docker Hub (public) or from your own private Azure Container Registry.

I did all the work from the command line using Docker commands where I just pushed to my internal registry!

> docker login hanselregistry.azurecr.io
> docker build -t hanselregistry.azurecr.io/podcast .
> docker push hanselregistry.azurecr.io/podcast

Took minutes to get my podcast site running on Azure in Web Apps for Containers. And again - this is the penny pinching part - keep control of the App Service Plan (the VM underneath the App Service) and use the smallest one you can and pack the containers in tight.

Watch the video, and note when I get to the part where I create an "App Service Plan." Again, that's the VM under a Web App/App Service. I have 19 smallish websites inside a Small (sometimes a Medium, I can scale it whenever) App Service. You should be able to fit 3-4 decent sites in small ones depending on memory and CPU characteristics of the site.

Click Pricing Plan and you'll get here:

Recommend Pricing tiers have many choices

Be sure to explore the Dev/Test tab on the left as well. When you're making a non-container-based App Service you'll see F1 and D1 for Free and Shared. Both are fine for small websites, demos, hosting your github projects, etc.

Free, Shared, or Basic Infrastructure

If you back up and select Docker as the "OS"...

Windows, Linux, or Docker

Again check out Dev/Test for less demanding workloads and note B1 - Basic.

B1 is $32.74

The first B1 is free for 30 days! Good to kick the tires. Then, as of the time of this post, it's US$32.74 (check pricing for different regions and currencies) but has nearly 2 gigs of RAM. I can run several containers in there.

Just watch your memory and CPU and pack them in. Again, more money means better perf, but the original ask here was how to save money.

Low CPU and 40% memory

To sum up, ACI is great for per-second billing, spinning up n containers programmatically, and getting out fast, and App Service for Containers is an easy way to deploy your Dockerized apps. Hope this helps.

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

EarTrumpet 2.0 makes Windows 10's audio subsystem even better...and it's free!

Jun 13, 2018

Description:

EarTrumpet

Last week I blogged about some new audio features in Windows 10 that make switching your inputs and outputs easier, but even better, allow you to set up specific devices for specific programs. That means I can have one mic and headphones for Audition, while another for browsing, and yet another set for Skype.

However, while doing my research and talking about this on Twitter, lots of people started recommending I check out "EarTrumpet" - it's an applet that lets you control the volume of classic and modern Windows Apps in one nice UI! Switching, volume, and more. Consider EarTrumpet a prosumer replacement for the little Volume icon down by the clock in Windows 10. You'll hide the default one and drag EarTrumpet over in its place and use it instead!

EarTrumpet

EarTrumpet is available for free in the Windows Store and works on all versions of Windows 10, even S! I have no affiliation with the team that built it and it's a free app, so you have literally nothing to lose by trying it out!

EarTrumpet is also open source and on GitHub. The team that built it is:

Rafael Rivera - a software forward/reverse engineer
David Golden - lead engineer on MetroTwit, the greatest WPF Twitter Client the world has never known.
Dave Amenta - ex-Microsoft, worked on shell and Start menu for Windows 8 and 10

It was originally built as a replacement for the Volume Control in Windows back in 2015, but EarTrumpet 2.0's recent release makes it easy to use the new audio capabilities in Windows 10's April 2018 Update.

Looks Good

It's easy to make a crappy Windows App. Heck, it's easy to make a crappy app. But EarTrumpet is NOT just an "applet" or an app. It's a perfect example of how a Windows 10 app - not made by Microsoft - can work and look seamlessly with the operating system. You'll think it's native - and it adds functionality that probably should be built in to Windows!

It's got light/dark theme support (no one bothers to even test this, but EarTrumpet does) and a nice acrylic blur. It looks like it's built-in/in-box. There's a sample app up on Rafael's GitHub so you can make your apps look this sharp, and here's the actual BlurWindowExtensions that EarTrumpet uses.

Works Good

Quickly switch output

EarTrumpet 1.x works on Windows "RS3 and below" so that's 10.0.16299 and down. But 2.0 works on the latest Windows and is also written entirely in C#. Any remaining C++ code has been removed with no missing functionality.

EarTrumpet may SEEM like a simple app but there's a lot going on to be this polished AND work with any combination of audio hardware. As a podcaster and remote worker I have a LOT of audio devices, but I now have one-click control over it all.

Given how fast Windows 10 has been moving with Insiders Builds and all, it seems like there are a bunch of APIs with new functionality that lack docs. The EarTrumpet team has reverse engineered the parts they needed.

Modern Resource Technology (MRT) Resource Manager - Used to pull resources out of AppX apps on the local machine without manually trying to crack open PRI files, etc. Background: https://docs.microsoft.com/en-us/windows/uwp/app-resources/using-mrt-for-converted-desktop-apps-and-games and Code here. They're investigating moving to the now-available Windows.ApplicationModel.Resources.* API.

Internal Audio Interface: IAudioPolicyConfigFactory - Gets them access to new APIs (GetPersistedDefaultAudioEndpoint / SetPersistedDefaultAudioEndpoint) in RS4 that let them 'redirect' apps to different playback devices. Same API used in modern sound settings. Code here, with no public API yet?

Internal Audio Interface: IPolicyConfig - Gets them access to the SetDefaultEndpoint API; lets them change the default playback device. Code here, and no public API yet?

Acrylic Blur (win32) - They want to match the shell UI/UX, so it needs to look and feel the same. Blog post: https://withinrafael.com/2018/02/01/adding-acrylic-blur-to-your-windows-10-apps-redstone-4-desktop-apps/ and Code here. They're evaluating potential replacements with some composition APIs now in RS4.

From a development/devops perspective, I am told EarTrumpet's team is able to push a beta flight through the Windows 10 Store in just over 30 minutes. No waiting for days to get beta test data. They use Bugsnag (which offers a generous OSS license) to catch crashes and telemetry. So far they're getting >3000 new users a month as the word gets out, with nearly 100k users total! Hopefully +1 as you give EarTrumpet a try yourself!

Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.


© 2018 Scott Hanselman. All rights reserved.
     

ASP.NET Core Architect David Fowler's hidden gems in 2.1

Jun 11, 2018

Description:

ASP.NET Architect David Fowler

Open source ASP.NET Core 2.1 is out, and Architect David Fowler took to Twitter to share some hidden gems that not everyone knows about. Sure, it's faster, builds faster, runs faster, but there are a number of details and fun advanced techniques that are worth a closer look.

.NET Generic Host

ASP.NET Core introduced a new hosting model. .NET apps configure and launch a host.

The host is responsible for app startup and lifetime management. The goal of the Generic Host is to decouple the HTTP pipeline from the Web Host API to enable a wider array of host scenarios. Messaging, background tasks, and other non-HTTP workloads based on the Generic Host benefit from cross-cutting capabilities, such as configuration, dependency injection (DI), and logging.

This means that there's not just a WebHost anymore, there's a Generic Host for non-web-hosting scenarios. You get the same feeling as with ASP.NET Core and all the cool features like DI, logging, and config. The sample code for a Generic Host is up on GitHub.

IHostedService

IHostedService gives you a way to run long-running background operations in both the generic host and in your web-hosted applications. ASP.NET Core 2.1 added support for a BackgroundService base class that makes it trivial to write a long-running async loop. The sample code for a Hosted Service is also up on GitHub.

Check out a simple Timed Background Task:

public Task StartAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Timed Background Service is starting.");

_timer = new Timer(DoWork, null, TimeSpan.Zero,
TimeSpan.FromSeconds(5));

return Task.CompletedTask;
}
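
For comparison, here's a minimal sketch of the same idea written against the new BackgroundService base class - my own illustrative worker, not the official sample:

public class TimedWorker : BackgroundService
{
private readonly ILogger<TimedWorker> _logger;

public TimedWorker(ILogger<TimedWorker> logger)
{
_logger = logger;
}

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
// The long-running async loop lives here; the host signals the token on shutdown
while (!stoppingToken.IsCancellationRequested)
{
_logger.LogInformation("Doing background work.");
await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
}
}
}

You'd register it in ConfigureServices with services.AddHostedService<TimedWorker>() and the host takes care of starting and stopping it.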

Fun!

Windows Services on .NET Core

You can now host ASP.NET Core inside a Windows Service! Lots of people have been asking for this. Again, no need for IIS, and you can host whatever makes you happy. Check out Microsoft.AspNetCore.Hosting.WindowsServices on NuGet and extensive docs on how to host your own ASP.NET Core app without IIS on Windows as a Windows Service.

public static void Main(string[] args)
{
var pathToExe = Process.GetCurrentProcess().MainModule.FileName;
var pathToContentRoot = Path.GetDirectoryName(pathToExe);

var host = WebHost.CreateDefaultBuilder(args)
.UseContentRoot(pathToContentRoot)
.UseStartup<Startup>()
.Build();

host.RunAsService();
}

IHostingStartup - Configure IWebHostBuilder with an Assembly Attribute

Simple and clean with source on GitHub as always.

[assembly: HostingStartup(typeof(SampleStartups.StartupInjection))]

Shared Source Packages

This is an interesting one you should definitely take a moment and pay attention to. It's possible to build packages that are used as helpers to share source code. We internally call these "shared source packages." These are used all over ASP.NET Core for things that should be shared BUT shouldn't be public APIs. These get used but won't end up as actual dependencies of your resulting package.

They are consumed like this in a CSPROJ. Notice the PrivateAssets attribute.

<PackageReference Include="Microsoft.Extensions.ClosedGenericMatcher.Sources" PrivateAssets="All" Version="" />
<PackageReference Include="Microsoft.Extensions.ObjectMethodExecutor.Sources" PrivateAssets="All" Version="" /> ObjectMethodExecutor

If you ever need to invoke a method on a type via reflection and that method could be async, we have a helper that we use everywhere in the ASP.NET Core code base that is highly optimized and flexible called the ObjectMethodExecutor.

The team uses this code in MVC to invoke your controller methods. They use this code in SignalR to invoke your hub methods. It handles async and sync methods. It also handles custom awaitables and F# async workflows.

SuppressStatusMessages

A small and commonly requested one. If you hate the output that dotnet run gives when you host a web application (printing out the binding information) you can use the new SuppressStatusMessages extension method.

WebHost.CreateDefaultBuilder(args)
.SuppressStatusMessages(true)
.UseStartup<Startup>();

AddOptions

They made it easier in 2.1 to configure options that require services. Previously, you would have had to create a type that derived from IConfigureOptions<TOptions>; now you can do it all in ConfigureServices via AddOptions<TOptions>.

public void ConfigureServices(IServiceCollection services)
{
services.AddOptions<MyOptions>()
.Configure<IHostingEnvironment>((o,env) =>
{
o.Path = env.WebRootPath;
});
}

IHttpContext via AddHttpContextAccessor

You likely shouldn't be digging around for IHttpContext, but lots of folks ask how to get to it and some feel it should be automatic. It's not registered by default since having it has a performance cost. However, in ASP.NET Core 2.1 a PR was put in for an extension method that makes it easy IF you want it.

services.AddHttpContextAccessor();
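
Once it's registered, you take a dependency on IHttpContextAccessor anywhere you need the current HttpContext. Here's a minimal sketch of a consuming service - my example, not from the ASP.NET Core docs:

public class CurrentUserService
{
private readonly IHttpContextAccessor _accessor;

public CurrentUserService(IHttpContextAccessor accessor)
{
_accessor = accessor;
}

// HttpContext is null outside of a request, so guard before dereferencing
public string GetUserName()
{
return _accessor.HttpContext?.User?.Identity?.Name ?? "anonymous";
}
}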

So ASP.NET Core 2.1 is out and ready to go

New features in this release include:

SignalR – Add real-time web capabilities to your ASP.NET Core apps.
Razor class libraries – Use Razor to build views and pages into reusable class libraries.
Identity UI library & scaffolding – Add identity to any app and customize it to meet your needs.
HTTPS – Enabled by default and easy to configure in production.
Template additions to help meet some GDPR requirements – Give users control over their personal data and handle cookie consent.
MVC functional test infrastructure – Write functional tests for your app in-memory.
[ApiController], ActionResult<T> – Build clean and descriptive web APIs.
IHttpClientFactory – HttpClient client as a service that you can centrally manage and configure.
Kestrel on Sockets – Managed sockets replace libuv as Kestrel's default transport.
Generic host builder – Generic host infrastructure decoupled from HTTP with support for DI, configuration, and logging.
Updated SPA templates – Angular, React, and React + Redux templates have been updated to use the standard project structures and build systems for each framework (Angular CLI and create-react-app).

Check out What's New in ASP.NET Core 2.1 in the ASP.NET Core docs to learn more about these features. For a complete list of all the changes in this release, see the release notes.

Go give it a try. Follow this QuickStart and you can have a basic Web App up in 10 minutes.

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Carriage Returns and Line Feeds will ultimately bite you - Some Git Tips

Jun 5, 2018

Description:

Typewriter by Matunos used under Creative Commons

What's a Carriage and why is it Returning? Carriage Return Line Feed WHAT DOES IT ALL MEAN!?!

The paper on a typewriter rides horizontally on a carriage. The Carriage Return or CR was a non-printable control character that would reset the typewriter to the beginning of the line of text.

However, a Carriage Return moves the carriage back but doesn't advance the paper by one line. The carriage moves on the X axis...

And Line Feed or LF is the non-printable control character that turns the Platen (the main rubber cylinder) by one line.

Hence, Carriage Return and Line Feed. Two actions, and for years, two control characters.

Every operating system seems to encode an EOL (end of line) differently. Operating systems in the late 70s all used CR LF together literally because they were interfacing with typewriters/printers on the daily.

Windows uses CRLF because DOS used CRLF because CP/M used CRLF because history.

Mac OS used CR for years until OS X switched to LF.

Unix used just a single LF over CRLF and has since the beginning, likely because systems like Multics started using just LF around 1965. Saving a single byte EVERY LINE was a huge deal for both storage and transmission.

Fast-forward to 2018 and it's maybe time for Windows to also switch to just using LF as the EOL character for Text Files.

Why? For starters, Microsoft finally updated Notepad to handle text files that use LF.

BUT

Would such a change be possible? Likely not; it would break the world. Here's NewLine on .NET Core.

public static String NewLine {
get {
Contract.Ensures(Contract.Result<String>() != null);
#if !PLATFORM_UNIX
return "\r\n";
#else
return "\n";
#endif // !PLATFORM_UNIX
}
}

Regardless, if you regularly use Windows and WSL (Linux on Windows) and Linux together, you'll want to be conscious and aware of CRLF and LF.
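
If you're writing code that has to cope with files coming from both worlds, one belt-and-suspenders trick is to normalize the endings yourself. Here's a minimal C# sketch - my own illustration, nothing to do with the Git tooling discussed below:

using System;
using System.IO;

class NormalizeEndings
{
static void Main(string[] args)
{
// Read a file that may contain CRLF or lone CR and normalize everything to LF
var text = File.ReadAllText(args[0]);
var normalized = text.Replace("\r\n", "\n").Replace("\r", "\n");
File.WriteAllText(args[0], normalized);
Console.WriteLine($"Normalized {args[0]} to LF line endings.");
}
}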

I ran into an interesting situation recently. First, let's review what Git does

You can configure .gitattributes to tell Git how to treat files, either individually or by extension.

When

git config --global core.autocrlf true

is set, git will automatically convert files quietly so that they are checked out in an OS-specific way. If you're on Linux and check out, you'll get LF; if you're on Windows, you'll get CRLF.

Viola on Twitter offers an important clarification:

"gitattributes controls line ending behaviour for a repo, git config (especially with --global) is a per user setting."

99% of the time this system and the options available work great.

Except when you are sharing file systems between Linux and Windows. I use Windows 10 and Ubuntu (via WSL) and keep stuff in /mnt/c/github.

However, if I pull from Windows 10 I get CRLF, and if I pull from Linux I get LF, so my shell scripts MAY OR MAY NOT WORK while in Ubuntu.

I've chosen to create a .gitattributes file that sets both shell scripts and PowerShell scripts to LF. This way those scripts can be used and shared and RUN between systems.

*.sh eol=lf
*.ps1 eol=lf

You've got lots of choices. Again 99% of the time autocrlf is the right thing.

From the GitHub docs:

You'll notice that files are matched--*.c, *.sln, *.png--, separated by a space, then given a setting--text, text eol=crlf, binary. We'll go over some possible settings below.

text=auto - Git will handle the files in whatever way it thinks is best. This is a good default option.

text eol=crlf - Git will always convert line endings to CRLF on checkout. You should use this for files that must keep CRLF endings, even on OSX or Linux.

text eol=lf - Git will always convert line endings to LF on checkout. You should use this for files that must keep LF endings, even on Windows.

binary - Git will understand that the files specified are not text, and it should not try to change them. The binary setting is also an alias for -text -diff.

Again, the defaults are probably correct. BUT - if you're doing weird stuff, sharing files or file systems across operating systems then you should be aware.

Edward Thomson, a co-maintainer of libgit2, has this to say and points us to his blog post on Line Endings.

I would say this more strongly. Because `core.autocrlf` is configured in a scope that's per-user, but affects the way the whole repository works, `.gitattributes` should _always_ be used.

If you're having trouble, it's probably line endings. Edward's recommendation is that ALL projects check in a .gitattributes.

The key to dealing with line endings is to make sure your configuration is committed to the repository, using .gitattributes. For most people, this is as simple as creating a file named .gitattributes at the root of your repository that contains one line:
* text=auto

Hope this helps!

I hope Microsoft bought Github so they can fix this CRLF vs LF issue.

— Scott Hanselman (@shanselman) June 4, 2018

* Typewriter by Matunos used under Creative Commons

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


© 2018 Scott Hanselman. All rights reserved.
     

Automatically change your Audio Input, Output and Volume per application in Windows 10

May 31, 2018

Description:

I recently blogged about an amazing little utility called AudioSwitcher that makes it two-clicks easy to switch your audio inputs and outputs. I need to switch audio devices a lot as I'm either watching video, doing a podcast, doing a conference call, playing a game, etc. That's at least three different "scenarios" for my audio setup. I've got 5 inputs and 5 outputs and I've seen PC audiophiles with even more.

I set up this AudioSwitcher and figured, cool, solved that silly problem. Then I got "EarTrumpet" - it's an applet that lets you control the volume of classic and modern Windows Apps in one nice UI! Switching, volume, and more. Very "prosumer," which is me, so I dig it.

A little birdie said that I should also look closer at Windows 10 itself. What? I know this OS like the back of my hand! Nonsense!

Hit the Start Menu and search for either "Sound Mixer" or "App Volume"

Sound mixer options

There's a page that does double duty called App Volume and Device Preferences.

You can also get to it from the regular Settings | Audio page:

change the device or app volume

See where it says "Change the device or app volume?" Ok, now DRINK THIS IN.

You can set the volume in active apps on an app-by-app basis. Cool. NOT IMPRESSED ARE YOU? Of course not, because while that's a lovely feature it's not the hidden power I'm talking about.

You can set the Preferred Input and Output device on an App by App Basis.

App Volume and Device Preferences

You can set the Preferred Input and Output device on an App by App Basis.

Read that again. I'll wait.

Rather than me constantly using the Audio Switcher (lovely as it is) I'll just set my ins and outs for each app.

The only catch is that this list only shows the apps that are currently using the mic/speaker, so if you want to get a nice setup, you'll want to run each app first in order to change its settings.

Here I've got the system sounds running through Default (usually the main speakers; the default mic is a webcam). The Speech Runtime (I use WIN+H to use Windows 10's built-in Dragon-Naturally-Style-But-Not free dictation in any app) uses the Webcam mic explicitly as it has the best recognition in my experience. Skype for Business is now using the phone. You can certainly set these things in the apps themselves, but in my experience Skype for Business doesn't care about your feelings or your audio settings. ;) I record my podcast with Zencastr so I've set up Chrome for my preferred/optimal settings.

I can still use the AudioSwitcher but now my defaults are contextual so I'm switching a LOT LESS.

Be sure to pick up "EarTrumpet" for even more advanced options!

What do you think? Did YOU know this existed?

Sponsor: Learn how .NET in 2018 addresses the challenges developers are working on with future-focused technology. Get the new whitepaper on "The State of .NET in 2018" by the Progress Telerik team!


© 2018 Scott Hanselman. All rights reserved.
     

Securing an Azure App Service Website under SSL in minutes with Let's Encrypt

May 29, 2018

Description:

A screenshot that says "Your connection to this site is not secure."

Let’s Encrypt is a free, automated, and open Certificate Authority. That means you can get free SSL certs and change your sites from http:// to https://. What's the catch? The SSL certificates only last 90 days - not a year or years. They do this to encourage automation. If you set this up, you'll want to have some scripts or a background process to automatically renew and install the certificates.

I run nearly two dozen websites (some small, some significant) on Azure. Given that Chrome 68+ is going to call out non-HTTPS sites explicitly as "Not secure" in July, now's as good a time as any for us to get our sites - large and small - encrypted. I have some small static "brochure-ware" sites like http://babysmash.com that just aren't worth the money for a cert. Now it's free, so let's do it.

In some theoretical future, I hope that Azure and Clouds like it will have a single "encrypt it" button and handle the details for us, but as of the date of this blog post, there's some manual initial setup and some great work from the community.

There's a protocol for getting certificates called "ACME" - Automated Certificate Management Environment - and the EFF has a tool called Certbot that helps you request and deploy certs. There is a whole ecosystem around it, and if you are running Windows/IIS you can use a great simple ACME client called "<