Home Network Segmentation, NAT Loopback to VLAN on Ubiquiti Unifi Gear

November 4, 2016

I’ve been putting off segmenting my network for a while now, but the recent IoT-botnet-powered DDoS bumped the task up my list of priorities, and I finally got around to doing it. Generally, if your network is anything other than non-critical clients accessing the internet, that is to say if you have any sort of IoT devices or if you host any internet-facing services at home, it’s probably smart to split your network into segments. Doing so allows finer-grained control over which machines can talk to each other, thus enhancing security. A segmented network is usually also easier to survey and audit, because irregularities like “why the hell is there an Acer laptop in my server segment?” stand out more, and with the appropriate monitoring solutions you can more easily generate usage stats by just running queries against an entire segment.

Let’s use the IoT botnet situation as a practical example. If your Chinese-made, off-brand IP camera system was rooted, we now know that it has the potential to take part in taking down a third of the internet. But given that a third party has complete control over the device, there’s no telling what it could do, including spreading to other machines on your network, stealing files off an improperly secured network share, and so on. Not being able to shitpost to Facebook is one thing, but messing around with my self-hosted stuff and data is where I draw the line.

There exist several models for how to segment networks, but this is one of those things that is really an exercise in common sense. A military buddy once evoked a three-tier system that’s the standard for military IP networks; that’s probably great for their use case, but your home isn’t a forward operating base in the Horn of Africa, and the military probably doesn’t have the likes of Nests and Augusts in their networks. Thanks to the magic of VLANs, you aren’t bound to a specific number of segments, and you’re probably better off doing more than not enough, within reason. Here’s what my setup looks like.

Diagram: my home network segments.

It’s a pretty simple setup, but since the diagram doesn’t tell the whole story, here’s some more explanation.

General access: This is the untagged network that anybody who connects via wifi or wired connection gets assigned to without further configuration. This is the least trusted segment, in that it has limited access to the others, but it’s also protected from the other segments.

Web-facing services: Self-explanatory. This segment is completely isolated from all the others, both in and out, because clients normally just connect through the WAN. There is an exception for local clients trying to connect from the General access and Network management segments, which get NAT’d back in, but those exceptions are made for application-specific ports only and everything else can be firewalled out. For Unifi gear, everything needs to talk back to the controller for your network to work, and if that controller is local, you’ll need to punch a few holes in your firewall. More on this later.

Network management: This is for the administration interfaces of switches, routers and access points, in addition to IPMI and other remote KVM options. In a Unifi environment, nobody should be talking directly to the APs or the switch, so this segment can normally be isolated. Per-host exceptions can be made if you want to access those devices, most likely an IPMI console, from your main machine. Seeing that IPMI gives the next best thing to physical access to your systems and has been proven to be a wide-open opportunity for attack, it stands to reason that this segment should be sealed up as tightly as possible through your firewall policy. Note that for now, Unifi APs do not support being managed on a tagged VLAN, although the feature is supposedly in the works. My own Unifi switch did not have this problem with the latest firmware.

Home Automation: Since home automation products usually only talk to the internet anyways, this is another locked-down segment that can’t speak to anybody. Where necessary, an appliance that needs to reach the more “commercial-grade” products that actually allow interaction from the LAN can be assigned an address within this VLAN. While it’s not deployed right now, a virtualized machine running Home Assistant would be an interesting addition here.
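
To make this more concrete, here is a rough sketch of how segments like these could be expressed as VLAN interfaces in EdgeOS CLI terms; on a USG, the controller’s network settings generate the equivalent config for you. The General access subnet and VLAN 121 for web-facing services line up with the NAT example further down, but the remaining VLAN IDs, subnets and gateway addresses are placeholders I made up for illustration.

set interfaces ethernet eth1 address 10.1.0.1/24
     #General access, untagged LAN
set interfaces ethernet eth1 vif 121 address 10.1.2.1/24
     #Web-facing services on VLAN 121
set interfaces ethernet eth1 vif 122 address 10.1.3.1/24
     #Network management (placeholder VLAN ID and subnet)
set interfaces ethernet eth1 vif 123 address 10.1.4.1/24
     #Home automation (placeholder VLAN ID and subnet)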


Building a segmented network with a Unifi gateway as your router is a bit different from what could be done on other platforms, since the incomplete GUI controls don’t offer all the options necessary for fine-tuning your setup. The major annoyance is that NAT loopback (a.k.a. hairpin NAT or NAT reflection) doesn’t seem to be properly implemented. Port-forwarding via the USG configuration menu works when accessing from the internet, but the loopback config seems to assume that you will only be forwarding ports to a single subnet, and hence only need loopback to and from this subnet. We’ll need to fix this.

Unifi USG port forward dialog. Simple enough.

I found the fix for this while reading a dated how-to for EdgeMAX products. Once you’ve configured your port forwards, you’ll have to manually set up NAT masquerading to the hosts that will be receiving the forwards for loopback to function correctly. There is no way to set this up in the GUI, so you’ll have to use the old config.gateway.json trick to manually input some NAT rules.

I strongly recommend learning how to configure EdgeMAX / Unifi gear from the command line, because it’s a great help in understanding how to modify the config file, but in case you’re in a hurry, here’s a sample of the JSON you’ll need to add.

 
{
     "service": {
          "nat": {
               "rule": {
                    "7001": {
                         "description": "Hairpin NAT Transmission",
                         "destination": {
                               "address": "10.1.2.3",
                               "port": "9091"
                         },
                         "log": "disable",
                         "outbound-interface": "eth1.121",
                         "protocol": "tcp",
                         "source": {
                               "address": "10.1.0.0/24"
                         },
                        "type": "masquerade"
                    }
               }
          }
     }
}

Replace the description, destination address and port, source address range, and outbound interface (mind the VLAN!) and you are good to go. You will need one rule for every host. This JSON was built assuming you have nothing else in your on-controller config file; respect the hierarchy if you have other things going on in there, and ALWAYS validate your JSON.

You’ll need a rule for every port forward you want accessible from your LAN. It might be possible to define those in one shot by specifying several destination addresses and ports, but I haven’t tested this.

In the event that the destination port on your forward is not identical to the incoming port, you’ll want to use the inside port in this masquerade rule for it to work. Be careful: loopback traffic is not exempt from passing through your LAN-IN firewall rules, so you’ll need to configure exemptions to let it through.
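
For example, here’s a hedged sketch of what a second rule could look like for a hypothetical forward where the outside port (say, 8443) differs from the port the host actually listens on (say, 443): the masquerade’s destination uses the inside port. The host address, ports and rule number are made up, and if you’re combining this with the rule above, both entries go under the same "rule" object in a single config.gateway.json.

{
     "service": {
          "nat": {
               "rule": {
                    "7002": {
                         "description": "Hairpin NAT example, inside port 443",
                         "destination": {
                              "address": "10.1.2.4",
                              "port": "443"
                         },
                         "log": "disable",
                         "outbound-interface": "eth1.121",
                         "protocol": "tcp",
                         "source": {
                              "address": "10.1.0.0/24"
                         },
                         "type": "masquerade"
                    }
               }
          }
     }
}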

This is what my LAN-IN firewall config looks like. Note the exemptions: 1) for NAT loopbacks, 2) for packets coming back on active connections to services available through loopback connections and 3) for the switch on the management segment, which still needs to talk to the controller which has not yet been migrated from my general access segment. Rule 2004 is disabled for the same reason.

The content of Web-Service-Loopbacks group. Note that I used hosts, not the entire segment, which would negate my isolation rules.
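
For reference, the loopback exemption boils down to accept rules near the top of LAN-IN that match traffic headed for the loopback hosts plus return traffic on established connections, sitting above the inter-segment drop rules. Expressed in rough EdgeOS CLI terms it might look like the sketch below; the ruleset name, rule numbers and address are placeholders, and on a USG you would build the equivalent through the controller’s firewall GUI rather than typing this in.

set firewall group address-group Web-Service-Loopbacks address 10.1.2.3
     #individual hosts receiving hairpinned forwards, not the whole segment
set firewall name LAN_IN rule 2001 action accept
set firewall name LAN_IN rule 2001 description "Allow NAT loopbacks"
set firewall name LAN_IN rule 2001 destination group address-group Web-Service-Loopbacks
set firewall name LAN_IN rule 2002 action accept
set firewall name LAN_IN rule 2002 description "Allow established/related"
set firewall name LAN_IN rule 2002 state established enable
set firewall name LAN_IN rule 2002 state related enable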

That’s it! While you’re doing the legwork of editing the JSON file, you might want to do things like disable SSH password authentication…

 
{
     "service": {
          "ssh": {
               "disable-password-authentication": "''"
          }
     }
}

…and configure RSA login.

 
{
     "system": {
          "login": {
               "user": {
                    "admin": {
                         "authentication": {
                               "public-keys": {
                                    "user@hostname": {
                                         "key": "aaaaabbbbbcccccddddd",
                                         "type": "ssh-rsa"
                                    }
                               }
                         }
                    }
               }
          }
     }
}

That’s a pretty good start to a robust and secure home network. Let me know if I missed anything.

The Internet of Things Needs Integrators, Not Gadgets

October 18, 2016

Crowd-funding and the sharp drop in development costs for internet-connected devices have given us both an onslaught of useless gadgets and the much-overused buzzword “Internet of Things”, or IoT. The concept has been pushed way past the borders of absurdity, as highlighted by the likes of @InternetofShit. I’m only half surprised: personal computing has plateaued, and with many of the “traditional” challenges posed by hardware, such as transistor and storage density, being more or less solved, what more is there to innovate in? To break the “sit down in front of a screen” paradigm of computing, the industry is shooting in all directions to find new ground. This spray-and-pray approach to innovation is surely finding new ways to do things, but in the frenzy of it, we are offered truly senseless gadgets, from cloud-connected candles to Bluetooth kegel exercisers.

All these gadgets are of course intended to be mass-marketed to the public at large, people who in the aggregate are almost completely technologically illiterate. I’d bet that most of the time, they are used with whatever app they ship with, and seldom integrated into other systems through platforms like IFTTT. Because they have to just work even for the least tech-savvy, several shortcuts are taken in the design of these gadgets: more often than not they are battery powered, and they work in ways which handicap their usability for more advanced users. They are almost never meant to have a lifetime that exceeds a few years, at most. Almost all the time, they are dependent on some online service running in the cloud, which is liable to be hacked, taken down by an outage, or just plain shut down when the gadget-maker inevitably goes bankrupt or disappears into the fog like so many other tech companies.

While all this is happening, people buying the gadgets with the intent of having technology make their lives easier are connecting all those devices to $50 routers which are generally both unreliable and insecure. They pipe enormous amounts of data, some of it sensitive in nature, to servers hosted who knows where. Home automation is on everybody’s lips, but houses are still being built with a basic run of RG6 and a few phone lines as the only data cables present within their walls. Technology as a whole is ubiquitous, but no one technology has gained the traction necessary to signal a significant change; one person might have a smart lock and another a few smart lights, but only the biggest nerds have integrated these things into systems which ACTUALLY make day-to-day living easier or more enjoyable. Absolutely none of these piecemeal IoT innovations has had the impact on the population at large of, say, the electric refrigerator or the good old personal computer. I believe the IoT will remain but a buzzword until the industry leaps over this hurdle and brings forth something that becomes as widespread in its adoption.

I feel that home automation is the place where the IoT really has the potential to shine. It’s completely abhorrent that the technology we use to interface with the functionalities of our living spaces has been largely unchanged for the past century. With the current concern for making dwellings more energy efficient, I think the time is ripe for more widespread adoption. But I believe that for IoT to gain traction, particularly in the home automation sphere, certain things need to change; some things need to stop, and others need to start happening more often.

STOP MAKING YOUR DEVICE RELIANT ON THE CLOUD. Off-loading compute and management to the cloud is a smart thing to do, for all sorts of reasons which I won’t bother listing. Most users don’t mind sending massive amounts of data to the cloud, and those that do are generally the type of people to homebrew their own automation solutions, so privacy concerns are not a huge deal. Where relying on the cloud gets annoying is when it, or your access to it, fails. The PetNet fiasco is a good example of this. It’s horribly bad design to rely on inputs from the internet for things to work, and even in 2016, nobody should ever be assumed to have a 100%-uptime internet connection. Ubiquiti has this figured out for their wireless access points. They employ a provisioning process that stores configuration locally on the devices and expects regular pingbacks from them, but works just fine when the controller is absent for whatever reason, even when the units get power-cycled. I’ve had access point setups run without a controller for MONTHS without affecting the product’s foremost functionality: providing wifi.

START GIVING THE OPTION OF NOT USING THE CLOUD AT ALL. In a perfect world, everything would be open source and I could run a copy of Nest’s backend on my own servers if I wanted to. In this imperfect world of ours, that is not an option. That doesn’t mean users shouldn’t be given the choice to opt out of the cloud entirely and use the array of sensors onboard the device in different ways. Perhaps this is too much of the Apple-championed “protected ecosystem” idea permeating the tech sector at large, but making cloud management a non-negotiable part of the product is surely preventing a lot of these IoT companies from moving units.

Sure, there’s an API for everything, but the fact that I have to call a server on the internet to interact with my thermostat is totally nonsensical. That’s not to mention that API access usually can’t interact with the sensors directly. I’d love to be able to save myself a PIR install and use the presence sensor on my Nest products directly, but sadly, that’s not an option. How hard can it be to provide some way of querying devices, via the network or otherwise, to use the raw input they can provide?

It’s a matter of time before we start seeing people’s expensive gadgets turn into paperweights because the company providing the backend goes tits up. By enabling direct-to-device interaction, we can avoid this.

STOP MAKING EVERYTHING WIRELESS. I’m talking about both power and data. While the evolution of ICs and battery tech has us changing batteries less and less often, it’s still an ENORMOUS pain in the ass to change them. There’s a reason why fire departments literally have to go door to door to remind people to change the batteries in their smoke alarms. Power redundancy: YES! Battery-only power that makes me buy obscure-sized coin cells every 18 months: HELL NO!

Wireless data is obviously a go-to for IoT companies because it removes an important barrier to entry for consumers, who can just take stuff out of the box, plug it in and enjoy. This is fine if you use RGB lighting as a party trick to impress guests in your living room, but it’s very annoying if you have stuff throughout the house that needs to talk to Zigbee-to-wifi interface boxes. Soon, you find yourself with Zigbee relays plugged in everywhere, which is an eyesore. I can’t really comment on the security and interference implications of automating an entire house over wireless, but generally, reaching for whatever wireless protocol seems like a lazy workaround to using cables, which are more secure, resilient and effective at transmitting both power and data.

We live in a time where 4-pair network cable can transmit incredible amounts of data while powering a small TV, and the IEEE keeps building on this technology. Start thinking about how you can leverage it.

START EMBRACING PERMANENT INSTALLS AND LONG LIFE CYCLES. People buy homes for years, decades. If you intend to make a product which makes a home better, it should have a life cycle that makes sense on the scale of the thing it’s going into. Support your products for long enough to make permanent installs a likely consideration, and find new ways to monetize a longer relationship with your customer base. By all means, innovate and disrupt and do all those things that you startup nerds are up to, but don’t forget the people who bought your product because it solves one of their problems in favour of the fad followers who react strongly to hyped-up release announcements. Think cross-compatibility (of brackets, connectors, wiring) and upgradability (of software, and hardware components where applicable). A product that complements a home shouldn’t be sold like a phone or a smartwatch.

This is wishful thinking, because honestly I don’t think anybody is in the business of making durable goods anymore. It’s a shame, especially considering the recurring revenue potential of IoT devices in the form of services that complement the device.

START SUGGESTING HALF-DECENT NETWORKS TO YOUR CUSTOMERS. Defence wonks who work on military drones believe they should be used as a tactical tool for strategic ends, and lament how politicians just use them as cool toys whenever convenient. I think IoT has the same problem: IoT devices should be considered part of a larger infrastructure, designed to fit into a well-built core network, not as standalone solutions.

This goes back to the “stop making all the things wireless” point of course, but I’d really want to see the industry push for a holistic approach to making your home smart, starting with the basics: a decent router, correctly configured and laid-out access points, a hardwired network which is well integrated into the building and a proper rackmount cabinet where everything terminates cleanly. Some will say I’m a gear slut, and to that I will plead guilty as charged, but hear me out. I’m not asking for a 42U cabinet in every home. I’m asking for a semi-standardized way of laying out the vital components of a smart home which makes upkeep easier.

Networking equipment ought to be suggested as an option to owners building new homes; so much of our lives is spent interacting with internet-connected devices that it’s completely ridiculous to still have half-assed, retro-fitted solutions providing this vital connectivity.

This will be a hard transition to make because of resistance from both consumers and IoT manufacturers, but in the long run, developing infrastructure within homes on which companies can build will assuredly enable additional innovation in automation technologies. Tech people, make a consortium or whatever to push this agenda, and you’ll get to open up new avenues for your business in addition to creating new positions with cool names that you can put on your business cards.

START CHAMPIONING THE EXPERTISE OF INTEGRATORS. This ties in closely with all the previous points. No amount of ease of installation and third-party integration capability will replace well-thought-out integration of automation solutions into people’s dwellings. No matter how beautifully designed your product is, ideally it should be (in most cases) entirely hidden and require as little user intervention as possible. People want things that JUST WORK, and I’d argue that many times they are willing to pay people to get their stuff set up to just work.

Entrepreneurs are going to have to start offering those services, but there are things that manufacturers can start doing to help a network of home automation integrators appear. Define installation norms, collaborate with your industry peers while doing it, and start working on certification and installation referral programs to legitimize the people who are willing to do the legwork of showing the masses what your products can do. Nest has this kind of program, and it’s an honest attempt at getting something like this going. In the end, integrators who know what’s out there and what can be done have the potential to generate sales and create loyal customers. Additionally, they provide priceless technical feedback and opportunities for large-scale beta-testing of new products in real use cases. That’s stuff you usually don’t get when interacting directly with the customer.

The takeaway of this post is that companies bringing IoT products to market in the home automation sphere need to stop trying to push units through big-box retailers and design products that are meant to be long-term investments, just like air exchangers, heat pumps, furniture and large appliances. Give us fewer gadgets, and more solid solutions to actual problems. Give us less Bluetooth and more PoE Ethernet, things that integrate with a house, not a phone. Don’t just offer a cloud API; offer more complete access to the devices themselves, for users who want to use your products as products, and not as services. Maybe then, we can start seeing truly smart homes.

The “28 Pages” and Their Significance for Saudi-American Relations

August 16, 2016

A few weeks ago, a mysterious portion of the Joint Inquiry into Intelligence Community Actions Before and After the Terrorist Attacks of September 11, 2001 was declassified after two years of review proceedings. Commonly known as the “28 pages” (there are 29 of them, but the name stuck), the document describes the inquiry’s findings concerning possible links between the Kingdom of Saudi Arabia and certain individuals involved in the September 11 attacks.

Read the full article on 45eNord.ca.

THE 28 PAGES & SAUDI-AMERICAN RELATIONS

August 1, 2016

Late last week, a mysterious portion of the Joint Inquiry into Intelligence Community Actions Before and After the Terrorist Attacks of September 11, 2001, which had been classified since the report’s release, was declassified following a two-year-long declassification review. Known colloquially as the “28 pages” (the count is wrong, but the name stuck), the document describes what the inquiry found regarding possible links between officials from the Kingdom of Saudi Arabia and individuals known to be involved in 9/11.

Read the entire article at Observatory Media.

Bypassing Bell Fibe FTTH Hub with Ubiquiti EdgeMAX Equipment

July 31, 2016

Whenever I have to interact with big telcos, I inevitably come to ask myself why they are still in business. It’s a wonder that companies that are so big and so dysfunctional on so many levels still have any customers at all. I recently had to do an ISP switchover from dual Cogeco 100 Mbps lines over copper to a single Bell Fibe 250 Mbps line, and my experience was less than stellar. Apart from getting the usual “oh, we’re sorry, your line isn’t quite active yet on our end” not once, but twice after the install tech’s visit, their business technical support was entirely useless.

The dreaded Sagemcom FAST4350 aka Bell Hub 1000. Bell can connect to this to diagnose stuff apparently; you know what they say about back doors… Good riddance.

If you’re a Fibe customer, you probably know that Bell insists that you use their Hub 1000 or 2000 (their name for it; it’s actually a Sagemcom FAST4350 router) in your networking setup, regardless of the fact that as a business customer, you probably have something more suitable for the job. If you try to bypass it, don’t hope to get any kind of technical support: the mindless automatons that they haven’t managed to outsource or replace with machines at their level-one support will absolutely refuse to escalate your call or provide any sort of useful information. The Sagemcom has a bridge mode that can be triggered with button presses; if it’s bridging, they won’t support that either. I’ve heard from many sources that Bell has remote access to the router, and I’m tempted to believe it. Since neither I nor my customer was about to let a $30 piece of shit router with a back door be the weak link in our connectivity, we found a way to get things working with the EdgeRouter we had on site, while completely bypassing the Sagemcom box.

The Alcatel-Lucent ONT. This is obviously not optional.

The VLAN 35 / VLAN 36 trick is well known, albeit completely undocumented by Bell, and seems to work network-wide. Basically, traffic to the Bell ONT is divided into two VLANs: 35 for internet traffic and 36 for the Fibe TV streams. Our use case only had an internet connection, so getting it to work was as simple as creating a VLAN interface on the ethernet interface we used to connect to the Alcatel-Lucent ONT, and creating a PPPoE interface within that VLAN. Once that’s done, enter your username and password for the PPP connection as provided by Bell, set the firewall rules for your connection and you’re good to go. The commands should look like this:

set interfaces ethernet ethX vif 35
     #creates VLAN 35 on interface ethX
set interfaces ethernet ethX vif 35 pppoe 0
     #creates PPPoE connection zero within the newly created VLAN
set interfaces ethernet ethX vif 35 pppoe 0 username thisisyourusername@bellnet.ca
     #self-explanatory
set interfaces ethernet ethX vif 35 pppoe 0 password thisisyourpassword
     #ditto
set interfaces ethernet ethX vif 35 pppoe 0 firewall in name WAN_IN
     #applies your inbound firewall ruleset
set interfaces ethernet ethX vif 35 pppoe 0 firewall local name WAN_LOCAL
     #applies your local firewall ruleset
set interfaces ethernet ethX vif 35 pppoe 0 default-route auto
     #gets your default route from the PPPoE connection

By default, your EdgeRouter uses mostly the right options for a PPPoE connection: MTU was set to 1492 explicitly without my intervention, and name-server to auto as well. This will allow you to connect and get an IP; however, only certain parts of the internet will be accessible. I thought this had something to do with routes, but both a static interface default route and the PPPoE-provided routes yielded the same results. Connections to YouTube and other Google sites worked great, but half the internet didn’t, with packets dying 6 or 7 hops down. It turns out MSS clamping is to blame, so you’ll need to set that in your firewall options.

set firewall options mss-clamp mss 1412

Without this option, your packets can end up being too large and get dropped by certain hops which enforce the maximum MTU strictly. What’s funny is that MSS clamping is assumed if you use the wizards to define your WAN as PPPoE, but it’s not one of the assumed options when setting up PPPoE from scratch. Since we were migrating from a load-balanced setup with DHCP being provided by the modems, it was not present in our config.

Keep in mind that this will only work for setups without Fibe TV. For setups with it, you can follow the excellent instructions from here, with the added bonus that you can probably use the software port bridging functionality in your EdgeRouter to eliminate the need for a switch between your ONT and the Bell Hub. This is entirely untested though, so have fun at your own risk.
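
If you feel like experimenting with that idea anyway, a rough and equally untested sketch of the bridging in EdgeOS terms could look like this, with ethX being the ONT-facing port and ethY a spare port patched to the Bell Hub:

set interfaces bridge br0
     #bridge to carry the Fibe TV traffic
set interfaces ethernet ethX vif 36 bridge-group bridge br0
     #VLAN 36 (the TV streams) coming in from the ONT
set interfaces ethernet ethY bridge-group bridge br0
     #spare port going to the Bell Hub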

My primary frustration with Bell in this case, apart from having to go on-site three times to get a relatively simple job done, is that I’m pretty certain a more senior level 2 technician would have pointed to the MSS issue in a matter of minutes; it’s just the kind of thing you know when you work on this stuff all day. I know for a fact that there are some technically competent folks who work at or for Bell, because I’ve had great experiences in the past. I’ve assisted in some pretty hard-core roll-outs of Bell services to enterprise customers, and the techs there were present and helpful. Where they were not, a quick chat with the sales guy could fix things very rapidly. The problem is that if you don’t have a sales guy, even if you’re a business customer, you have to navigate the same byzantine system of call-in support as the rest of the plebs, wait a long time and hope for the best. If you’re lucky you’ll get a zealous one who’s willing to break SOP to make you happy; if you’re unlucky, you’ll get another drone who just reads the lines he’s supposed to.

Headless Steam In-Home Streaming, Pt 1: My Experience

July 28, 2016

For almost exactly three years now, I’ve been using a mid-2013 13.3″ Macbook Air as my primary machine. As I explained in a review which has now disappeared with the demise of Epinions, I didn’t expect the transition from an expensive gaming rig to a super-slim, barebones laptop to go as smoothly as it did. The idea behind the move was partially to make myself incapable of gaming during university, and partially to have a single machine which I would use for all my computing needs. I wanted a “single pane of glass”, as it were.

Sync solutions that aren’t cloud-based generally suck (although they’re getting better), and even when they don’t, there are obvious drawbacks. You know, things like needing an internet connection for syncing to happen, or the other device needing to be powered up. Controlling several machines in tandem also sucks; Synergy is a somewhat acceptable solution for doing this, but it’s one more piece of software to run, update, configure, and potentially troubleshoot. I wrote something on this in 2008 if you want to cringe. Sometimes, you just want to fire up a machine, open a file you’ve been working on from your desktop and put in some work, without thinking about it. That’s where having one compact computer with a 10+ hour battery life is very appealing.

It’s obvious that the Macbook Air has limitations, and getting past those limitations will invariably require more hardware. I was completely kidding myself when I thought I would be saving money by going to a less powerful machine, and I’ve got several Us of rackspace to show for it. The real challenge isn’t really saving money, it’s integrating the hardware in a way which sticks to this idea of a single terminal to rule them all. Steam’s addition of In-Home Streaming functionality is great for providing gaming in this way. Here’s my experience with getting the thing to run correctly.


My setup

As with any gaming setup, the sky is the limit as to what you build to use In-Home Streaming. My setup is relatively humble, a far cry from the flashy (and noisy, and expensive) rigs I was building before. Here are the specs.

Case: Cooler Master Elite 110
Power Supply: Corsair Pro Series AX650
Motherboard: Asrock H81-M ITX
CPU: Intel Core i7-4790K
Memory: Crucial Ballistix Sport 16GB
GPU: Zotac GTX 960 2G

This setup lives in a closet along with my other networking stuff, with only a network cable and a power cable going to it.


Common problems and fixes

The Steam KB covers setup and the really common problems well, so I’m not going to bother sharing how to get the thing to work. What I will cover, though, are the problems that I have personally run into. Most of them are related to running a machine without any input or output peripherals. Depending on how games manage mouse and keyboard input as well as video output, they may not like not having physical input devices plugged in, or not being given a screen resolution by the OS because no display is attached.

The most obvious need is for an emulated display. Being a long-time contributor to the Folding@Home project, I was well aware that nothing that uses a GPU will work without either a monitor or a dummy plug. The old-school way of doing this is to use the DVI-to-VGA adapters that usually come with graphics cards along with some resistors to cook up a quick and dirty dummy plug; the folding community has used those for years, and they are known to just work. While the internet specifies 68 to 75 ohm resistors to get this working, I’ve done mine with 125 ohm resistors to the same effect. The new-school way is to use an HDMI dummy plug, which has the additional advantage of emulating an audio output, eliminating the need for more dongles and hacks.

Audio also needs a simulated output, because most audio chipsets these days deactivate if no headphones or speakers are plugged into them. While I’m sure we all have spare sets of broken headphones around, neat freaks will likely want a cleaner solution. Most motherboard 3.5mm audio jacks have physical switches inside, so all you need is to stuff in either a dummy plug or an actual, unwired 3.5mm plug and onboard sound will work. I’ve personally run into games which crashed and explicitly gave an audio-related message. Some games might work regardless, but better safe than sorry.

Finally, mouse and keyboard input is also needed locally for remote inputs to work properly. I’ve played through half of AC4: Black Flag without the mouse’s scroll wheel because using it would cause the game to scroll infinitely. This is reported to happen in several games by several posters on the Steam forums. Here again, having a mouse plugged in is an easy solution, but not quite as clean as one would want it to be. My solution was to use a Logitech Unifying adaptor, without the wireless mouse or keyboard connected, as it registers as both mouse and keyboard HID devices. It’s also very low-profile, which is perfect for this application. I’ve been looking around for a way of emulating HID devices via software, but I have not yet found anything worth mentioning. From my time in big box stores, I knew for a fact that my local Geek Squad would have, for no discernible reason, a buttload of orphaned receivers; that’s where I got mine. Check with your local store if you don’t have a receiver handy. Otherwise, the receivers sell for about $10 on Amazon.

Once all the inputs and outputs are taken care of, you should have minimized the chances of games doing weird things on start-up. Then comes the optimization of the encoding for maximum performance and picture quality. From the get-go, Mac users who have an Nvidia GPU on the backend are SOL when it comes to hardware-accelerated encoding, because of a known incompatibility. The performance is great, but your game will suffer from occasional color spots and artefacts, especially in dark scenes.

Examples of artefacts and other problems caused by Nvidia encoding when Apple hardware receives it.

For my setup, I found that simply deactivating hardware encoding altogether makes for the most beautiful and smoothest gameplay. The Intel iGPU might seem like a good idea, because it offloads to hardware thanks to Intel Quick Sync Video, but I found it lacking, with the video output being quite laggy despite framerates staying high. I’m not sure this is what Quick Sync Video was meant for; I’d stay away from it. If you have a modern CPU, chances are your bottleneck isn’t the CPU anyhow, so software encoding will probably take little away from your experience unless you’re running a very CPU-intensive title. Steam forum posts seem to mirror my experience.


Caveats and important considerations

Despite the fact that this system works great for me, there are some things to be aware of. It works for me because I’m a filthy casual, albeit a very tech-enthusiast filthy casual. I only build a computer to play games when I’ve amassed too many from Steam Summer Sales to not play them. I’ve been gaming on wireless peripherals for a few years now, and I’m not going back. I don’t care too much about latencies, refresh rates and the like. If you’re a hardcore gamer, you probably hate the idea of streaming a game, because MUH LATENCY. If you’re a filthy casual of the non-IT-aficionado variety, I’m surprised you’re still reading this. This brings me to what is probably the biggest caveat of In-Home Streaming: I feel like it doesn’t fit too many use cases yet, for a variety of reasons.

Firstly, you need fast networking. I’m talking wired, ideally gigabit, if you want your games to run smoothly. Since most normie non-gamers’ idea of fast networking is likely a router with “Wireless AC” written on it, it’s probably not gonna work for most folks. The convenience of wifi has long overtaken good old cables, and it’s a hard sell to ask users to drop that convenience and run cables.

Secondly, it’s not like gaming in front of an actual display in terms of image quality. The video stream coming from the rendering side of the setup is compressed, and those of you with keen eyes will notice this in certain settings, especially if you’re running on a non-gigabit connection.

Thirdly, it’s still not seamless. Most games will work fine, even the ones that require a launcher or aren’t from Steam, but some titles inexplicably require some user intervention via RDP to get things to work. For example, after launching Watch Dogs, other Steam-enabled Ubisoft titles will sometimes no longer launch, requiring a restart of Uplay. Sticking to the same title is fine, but playing different ones will inevitably cause hiccups at some point which require user intervention. It’s never anything major, but it’s still a pain in the butt.

Finally, there is some of the dreaded additional latency, on both input and display. You won’t be winning a CS:GO world championship on a stream from a remote computer. It’s only really noticeable in faster shooters, but it’s there, by the very nature of how this works. As video codecs get better and 10GbE becomes ubiquitous, we’ll see this concern fade away, but for now we have to deal with it.


Conclusion

To sum it all up, while I don’t think the technology is quite mainstream-ready, Steam’s In-Home Streaming is a step in what I see as the way of the future, that is, decoupling the computing/gaming experience from the hardware required to provide it. Let’s face it: if you’re building your own computer, you can probably handle the additional legwork of configuring and maintaining a streaming box. If on top of that you don’t mind making minor concessions on performance for the sake of practicality, I’d strongly recommend you look into it. For me, having a totally silent work environment alone is well worth it.

In part 2, I’ll go over what I’d like my next streaming box to look like, and what steps I intend to take to integrate In-Home Streaming functionality into my existing server setup.

Free Dakota: A Review

July 9, 2016

Let me start this off by stating that I’m probably not the right person to review fiction. Following my review of The Free Market Existentialist, Dr. Irwin offered to send me a review copy of his recently released Free Dakota, and I gladly accepted after highlighting the fact that I’m hardly a literary savant. Reading non-fiction has the unfortunate opportunity cost of not letting me read as much fiction as I would like; as a result, I would hardly consider myself well-read in fiction in any of the languages I know how to read. Knowing this, take this review with a grain of salt.


Not unlike how the key to any good meal is balance, the obvious dilemma in creating any work of fiction with an overt political message is knowing how to properly blend the message into the plot without leaving too strong an aftertaste. Stating political opinions or preferences in argumentative form is one thing, but using characters as a mouthpiece is a whole other undertaking and definitely has the potential to backfire. Thomas More’s Utopia is one work that crosses the line, an example of how slathering normative preference in a thin layer of fiction doesn’t make it any more palatable. Raphael describes what More would not, and the fictional background seems of little use other than presuming the existence of superior moral beings in the form of the Utopian people. In many ways, Utopia felt more like reading a Wikipedia page on a non-existent country than following characters on their journey. I read enough non-fiction that when I do start reading something fictional, this is hardly an impression I want to have.

From my reading of William Irwin’s Free Dakota, it is quite obvious that the author is well aware of this risk, and has taken many steps to mitigate it. For one, characters both present in dialogue and implied are many, and each one of any significance is presented with all their qualities, quirks and flaws. The use of braided plots serves to strengthen the story, both by diluting the more “theoretical” parts and by uncovering the intrigues which are bound to happen in any political process. The result is an honest effort at story-based advocacy that, for all its merits, still hasn’t quite nailed the balance. The book still has merit in its power of conviction and its ability to plant the seed of ideas, but the ideological content is, in my view, still too overt to make it a mass-market success.

Free Dakota is the story of separatist movements. The main protagonist, Don, encounters two such movements: one in Vermont, which he quickly shuns for its collectivist tendencies, and later a similar effort in North Dakota, a more an-cap collective which grows spontaneously but is rapidly confronted with the reality of politics. With unlikely collaborators, Don rapidly goes from venting his frustrations in blog posts to doing what is necessary to win hearts and minds for his cause. His ragtag entourage, hailing from all walks of life, navigates conspiracies and power struggles with a gamut of state and non-state actors in the pursuit of making North Dakota a beacon of freedom in the United States.

Overall, I saw the book as an allegory for an idea that many defenders of political ideologies, irrespective of their proclivities, tend to forget: politics is in its essence compromise. In an age of Facebook meme wars, niche ideologies have found spaces to grow their following, at the cost of becoming echo-chambers reinforced by content-curation algorithms. Online, compromise is easily recognized as weakness, and granting a point to an opponent is grounds to be considered either defeated or treasonous. Purists have little to lose if not their spare time in defending ideological zealotry, and the impersonal interactions through text make discarding inconvenient rebuttals too easy. In real-life institutionalized politics, hardliners lose. First and foremost for the obvious reason that fringe groups never have the mass appeal to set the political agenda. Secondly because real-world discussions and debates are rarely 100% oppositional and require common ground, which both parties are incentivized to recognize, lest they sound unreasonable to the masses. Thirdly because politics is a messy power-struggle, one in which even the best of plans always get sidetracked.

Despite the idealism that Don shows through his blog posts throughout the novel, the series of events surrounding the pursuit of Free Dakota follows a much more sinuous path. This path seems to suggest the inevitable fallibility of “pure” libertarianism or anarcho-capitalism, showing some sort of ultra-minimal government as a more likely outcome of a libertarian revolution. In his prior work, which I reviewed, Irwin manifested a decidedly minarchist conception of libertarianism, so this comes as no surprise. While it might fall short of pressing all the feel-good buttons of convinced libertarians, I think that avoiding the trap of describing a utopia sends a more convincing message to fence-sitters: tasting the fruit of freedom will come at a price.

Being a political scientist from Quebec, I found the subject of separatism and sovereignty certainly struck a chord, but not for the reasons you might expect. Here in Quebec, popular support for state-led cultural imperialism and statism for nationalistic ends is precisely what allows repressive policies, a situation hardly comparable with the Free Dakota movement in Irwin’s book. For American libertarians and constitutionalists who see their federal government as Goliath and their states as David, this story will surely resonate. It’s the story of the revolution all over again, the little folk sticking it to the proverbial man in search of liberty. I kept seeing the negative: the imperfection of the pursuit of sovereignty at a smaller scale, which still impedes on the ultimate sovereignty, that of the individual. The referendum for Free Dakota passes, but what of the voters who did not consent to secession? They are victims of mob-rule democracy just like libertarians feel they are now. The story doesn’t say what happens to them. The situation is imperfect, but that’s precisely the point, I think: since we are condemned to choose, in life as in politics, we might as well give ourselves as many options as possible. Secession and the founding of Free Dakota isn’t a perfect end in itself, but rather another political option, a way to opt out of lesser options. Here again, the idea of compromise and the lesser of evils is very present.

If I’m perfectly honest, this is probably not a book I would have picked up for a pleasurable read: the normative charge is apparent from very early on, and Don’s ramblings with himself, on his blog and in dialogue, are sometimes a bore if what you’re looking for in a novel is escapism. Despite the fact that they lay down ground rules that might be helpful for newcomers, they do distract from the plot and upset the book’s balance. This didn’t pose enough of a problem to keep me from reading the book in one sitting, however. At under 200 relatively light pages, it’s an easy read, something I’d feel comfortable putting in anybody’s hands as a primer on libertarian thinking. As Irwin’s first effort in fiction, I think the result is good, and I’m eager to read more of his output: practice makes perfect.

You can buy Free Dakota from Amazon, and I also recommend that you check out Dr. Irwin’s recent commentary on Brexit which ties the event in with the content of the book.