In Defense of Political Radicals

January 11, 2017

Sometimes, life imitates shitposting. Under the guise of conducting research, I’ve come to join and like an absolutely inordinate number of politically-inclined shitposting groups and pages. It’s bad, to the point where normie posts about meals, trips and life achievements have been completely drowned out by political compass memes and ancap smileys. Right around the time when radical centrist memes began making an appearance in my favourite meme group, I became aware of a piece by Hunter Maats which perfectly represents everything that’s wrong with the idea that inspired the meme.

This is what Hunter Maats looks like. See the original tweet in context.

In his extremely verbose article, Maats performs an intricate kata of mixed mental arts, chopping down conveniently constructed straw men built up in the shape of Tom Woods, intended as a stand-in for the entire ideology of anarcho-capitalism. Between ad-homs (intended as bait?) and masturbating about how widely he has read, the author does a pretty shitty job of explaining to us why and how anarcho-capitalism sucks, relying mostly on choice arguments such as MUH HUMAN NATURE. I’m not going to white-knight for Tom, simply because he doesn’t need my help. I’m not going to refute the piece either, because the concerns Hunter has with anarcho-capitalism have been addressed before. Maybe he hasn’t read widely enough. What I am going to do is point out how the underpinnings of Maats’ “arguments” are essentially the same as those upon which the state builds its own perverse logic, and in doing so, hopefully make a sound defense of political radicals, a group in which I include libertarians.

What aggravates libertarians more than anything else about the state is its hubris: how a small group of elected officials, upon manning the controls of the apparatus of the state, suddenly come to think of themselves as gods who, with sufficient time and budgetary appropriations, can solve everything. Functionaries of the state and their elected political overlords have different interests, but those interests coincide when it comes to making the state look like an all-knowing entity with the power to fix everything. We can make the Middle East democratic. We can save people from their self-destructive habits through nudging and subsidies. We can engineer our way into permanent economic growth. We can provide everything for everyone forever.

This hubris of the state is also that of the technocrat, and apparently, of Hunter Maats. While reading his piece, I got the impression that Maats’ point was that if only everybody read as many New York Times best sellers as he did, we could finally move towards an improved society; if only everybody accepted consensus opinions on subjects like “human nature”, we’d have a solid foundation upon which to build a better society. In doing this, Hunter commits what should amount to a cardinal sin for anybody who believes in science: he behaves as if he had found pure, eternal, objective truth between the covers of paperbacks. His pretension of being able to build politics upon a solid foundation is nonsensical: firstly because it is epistemically unsound, and secondly because understanding politics as a science is absurd*.

Modernity was built on doubt; as well-read a person as Maats ought to know this. Descartes’ methodological doubt constitutes the basis of modern science, and thinkers like Karl Popper who built upon it have shed light on one of the foremost criteria of truth: falsifiability. In the face of science, the soundest of knowledge is to be questioned; all fact is provisional by definition. Men and women of science ought to be humbled by this sword of Damocles inherent to the process of discovering and rediscovering truth. Knowledge is a shaky edifice by virtue of the fact that its building blocks are liable to crumble at any time. When he makes cocksure pronouncements on the existence of an essence of man or on processes of evolution on the scale of the universe, what Maats is doing isn’t science; he’s simultaneously praying at the altar of science and desecrating it with his arrogance.

Ironically, the Austrian school of economics, whose works Maats refers to as mental masturbation, does a much better job of being humble in its view of the science of economics. To be sure, the Austrian methodology can be critiqued, but its insistence that economic preferences are subjective and largely impenetrable is a sign of commitment to the idea that some things simply can’t be perfectly revealed through science.

So, one could say that Maats’ refutation of anarcho-capitalism in favour of governance based on science is significantly weakened by the fact that he fails to acknowledge that every bit of knowledge he builds his case on could be proven completely false. What completely demolishes his case against Woods / libertarians / ancaps is his utter lack of understanding of what this person and these groups are fundamentally about.

The libertarian movement is a political one. This implies that it interacts with the other movements, actors and institutions that are contemporary to it; it attempts to influence the status quo in a manner coherent with its founding principles. Hunter portrays the movement as striving for the establishment, ex nihilo, of a stateless society; this is of course ridiculous, because politics is by its very nature a process. How did we get here, politically? According to historians and scholars of nationalism like Benedict Anderson, the state was formed around the state sponsorship of an official culture through things like language and education, progressively asserting itself as an incarnation of the collective will which bore much more legitimacy than that of divine-right monarchs. Democracy is a relatively recent ideal which best channels this notion of legitimacy, and it has proven to be robust, if imperfect. There is no pivotal moment where our current institutions magically appeared. There is no teleological “march of progress” leading up to democracy and the modern state. What we have before us is the result of human action, the slow process of humans shaping their environments through interacting with one another.

We libertarians, the political ones at least, wish to participate in this process for the betterment of our society. We are well aware of our institutional surroundings, as demonstrated by our eagerness to criticize the state, its innate immorality and its spectacular shortcomings. I know of nobody who wishes to achieve the libertarian ideals while ridding ourselves of the good things we have invented collectively (the idea of courts and arbitration of disputes, social institutions, and even *GASP* roads!), but this does not mean we do not carry a utopian vision of what we would want society to look like. This utopia is a guiding light, in the same way the sun, moon and stars can be used for navigation despite being out of reach. If politics is constantly in movement and evolution, this ideal is the direction of our vector, not a mere point.

In a sense, Maats is attempting to discredit an inherently political concept, the ideology of libertarianism, in strictly apolitical terms. In his article and follow-up tweet to me, he makes it sound as if being political were a bad thing, and as if science alone could yield a better future. Is he forgetting that the Kants and Rousseaus he likes to namedrop were also guided by an ideal, and were eminently political? Is he denying that their ideas had an influence on the foundation of our societies, despite their respective utopias never having been achieved? In citing Diamond, Pinker, et al. as “evidence” (his words) of some sort of human nature and a transcendent process towards progress, he discards the role of human action in the shaping of our world. The thinkers, the people their ideas swayed, the practitioners of politics and the masses they mobilized, the groups that coalesced around this or that aim or objective: they are the real force behind the creation of our societal order. The very idea of the contemporary nation-state that is so dear to Hunter was borne of an assembly of myths, from the social contract to the “collective will”. Who is to say that ideas can’t change the current order of things?

In his attempt to evacuate politics in order to assume the stance of the “realist” (again, his words), Maats dons the robes of the soulless technocrat, eager to engineer a better future but all too quick to forget the human beings who will inhabit it. Like the state, he adopts a quasi-religious belief in the “just a bit more effort” mentality and carefully plans a map to the exact location of what turns out to be a mirage of a brighter tomorrow. In the end, the only place he’s bound to end up after much adjustment, readjustment and sensible choices is further down the road to serfdom. The real enemies of human progress aren’t the ideologues on the left or the right, who dare imagine a different world. The real threat, if one indeed exists, is radical centrists like Maats, who look to smother the act of politics in what they perceive to be truth.

Radicals, keep doing what you do, your ideas and actions are the engine of history. Hunter, once you’re done beating off to best-sellers, you can come out and play too.


*This claim might sound weird coming from a political scientist. In the French language, there exists a distinction between la politique and le politique. The first refers to the conduct of politics, the interaction between political actors (campaigning, coalition forming, media ops, etc.), while the second refers to the abstract notion of the affairs of the state and governance. Here, I am referring to la politique. To be sure, my discipline, insofar as it concerns the analysis of public policy, voter behaviour and the scholarly analysis of political situations, is absolutely a science. When discussing ideology, however, this is not what we’re talking about.

Home Network Segmentation, NAT Loopback to VLAN on Ubiquiti Unifi Gear

November 4, 2016

I’ve been putting off segmenting my network for a while now, but the recent IoT-botnet-powered DDoS has bumped the task up my list of priorities, and I finally got around to doing it. Generally, if your network is anything other than non-critical clients accessing the internet, that is to say if you have any sort of IoT devices or if you host any internet-facing services at home, it’s probably a smart thing to split up your network into segments. Doing so allows finer-grained control over which machines can talk to each other, thus enhancing security. A segmented network is usually also easier to survey and audit, because irregularities like “why the hell is there an Acer laptop in my server segment?” stand out more, and with the appropriate monitoring solutions you can more easily generate usage stats by just running queries for an entire segment.

Let’s use the IoT botnet situation as a practical example. If your Chinese-made, off-brand IP camera system gets rooted, we now know that it has the potential to take part in taking down a third of the internet. But given that a third party has complete control over the device, there’s no telling what it could do, including spreading to other machines on your network, stealing files off an improperly secured network share, etc. Not being able to shitpost to Facebook is one thing, but messing around with my self-hosted stuff and data is where I draw the line.

There exist several models for how to segment a network, but this is one of those things that is really an exercise in common sense. A military buddy once mentioned a three-tier system that’s the standard for military IP networks; that’s probably great for their use case, but your home isn’t a forward operating base in the Horn of Africa, and the military probably doesn’t have the likes of Nests and Augusts in their networks. Thanks to the magic of VLANs, you aren’t bound to a specific number of segments, and within reason, you’re probably better off creating too many than not enough. Here’s what my setup looks like.


It’s a pretty simple setup, but since the picture doesn’t tell the whole story, here’s some more explanation.

General access: This is the untagged network that anybody who connects via wifi or wired connection gets assigned to without further configuration. This is the least trusted segment, in that it has limited access to the others, but it’s also protected from the other segments.

Web-facing services: Self-explanatory. This segment is completely isolated from all the others, both in and out, because clients normally just connect through the WAN. There is an exception for local clients trying to connect from the General access and Network management segments, which get NAT’d back in, but those exceptions are made for application-specific ports only and everything else can be firewalled out. For Unifi gear, everything needs to talk back to the controller for your network to work, and if that controller is local, you’ll need to punch a few holes in your firewall. More on this later.

Network management: This is for the administration interfaces of switches, routers and access points, in addition to IPMI and other remote KVM options. In a Unifi environment, nobody should be talking to the APs or the switch directly, so this segment can normally be isolated. Per-host exceptions can be made if you want to access those devices, most likely an IPMI console, from your main machine. Seeing that IPMI practically gives the next best thing to physical access to your systems and has been proven to be a wide-open opportunity for attack, it stands to reason that this segment should be sealed up as strongly as possible through your firewall policy. Note that for now, Unifi APs do not support being managed on a tagged VLAN, although the feature is supposedly in the works. My own Unifi switch did not have this problem with the latest firmware.

Home Automation: Since home automation products usually only talk to the internet anyway, this is another locked-down segment that can’t speak to anybody. Where necessary, an appliance that needs access to the more “commercial-grade” products that actually allow interaction from the LAN can be assigned an address within the VLAN. While it’s not deployed right now, a virtualized machine running Home Assistant would be an interesting addition.

Building a segmented network with a Unifi gateway as your router is a bit different from what can be done on other platforms, since the incomplete GUI controls don’t offer all the options necessary for fine-tuning your setup. The major annoyance is that NAT loopback (aka hairpin NAT or NAT reflection) doesn’t seem to be properly implemented. Port forwarding via the USG configuration menu works when accessing from the internet, but the loopback config seems to assume that you will only be forwarding ports to a single subnet, and hence only need loopback to and from that subnet. We’ll need to fix this.

Unifi USG port forward dialog. Simple enough.

I found the fix for this while reading a dated how-to for EdgeMAX products. Once you’ve configured your port forwards, you’ll have to manually set up NAT masquerading to the hosts that will be receiving the forwards for loopback to function correctly. There is no means of setting these things up in the GUI, so you’ll have to use the old config.gateway.json trick to manually input some NAT rules.

I strongly recommend learning how to program EdgeMAX / Unifi gear, because it’s a great help in understanding how to modify the config file, but in case you’re in a hurry, here’s a sample of the JSON you’ll need to add.

     "service": {
          "nat": {
               "rule": {
                    "7001": {
                         "description": "Hairpin NAT Transmission",
                         "destination": {
                              "address": "",
                              "port": "9091"
                         },
                         "log": "disable",
                         "outbound-interface": "eth1.121",
                         "protocol": "tcp",
                         "source": {
                              "address": ""
                         },
                         "type": "masquerade"
                    }
               }
          }
     }

Replace the description, destination address and port, source address range and outbound interface (mind the VLAN!) and you are good to go. You will need one rule for every host. This JSON was built assuming you have nothing else in your config file; respect the hierarchy if you have other things going on in there, and ALWAYS validate your JSON.
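Validating the file takes seconds with standard tools, so there’s no excuse to skip it. Here’s a minimal sketch using python3 (jq works just as well); the /tmp path and the trimmed-down sample fragment are placeholders for illustration — point the same command at your real config.gateway.json:

```shell
# Write a minimal sample fragment, then check that it parses.
# Point the same command at your actual config.gateway.json.
cat > /tmp/config.gateway.json <<'EOF'
{
  "service": {
    "nat": {
      "rule": {
        "7001": {
          "description": "Hairpin NAT Transmission",
          "type": "masquerade"
        }
      }
    }
  }
}
EOF
# json.tool exits non-zero and prints the parse error on invalid JSON.
python3 -m json.tool /tmp/config.gateway.json > /dev/null && echo "JSON OK"
```

A stray comma or an unbalanced brace in config.gateway.json can leave the USG in a provisioning loop, so a five-second check beats a trip to the basement.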

You’ll need a rule for every port forward you want accessible from your LAN. It might be possible to define those in one shot by defining several destination addresses and ports, but I haven’t tested this.

In the event that the destination port of your forward is not identical to the incoming port, you’ll want to configure the inside port in this masquerade rule for it to work. Be careful: loopback traffic is not exempt from passing through your LAN-IN firewall rules, so you’ll need to configure exemptions to let it through.
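For reference, those LAN-IN exemptions are, under the hood, plain EdgeOS firewall rules. A rough sketch of what an accept rule for loopback traffic might look like in CLI terms — the rule number and the Web-Service-Loopbacks address-group name are placeholders from my setup, and on a USG you should keep defining these through the controller GUI so they survive re-provisioning:

```
set firewall name LAN_IN rule 2001 action accept
set firewall name LAN_IN rule 2001 description "Allow NAT loopback to web services"
set firewall name LAN_IN rule 2001 destination group address-group Web-Service-Loopbacks
set firewall name LAN_IN rule 2001 protocol all
```

Seeing the CLI equivalent is mostly useful for debugging: `show firewall name LAN_IN` on the gateway will tell you whether the rules the controller pushed actually match what you configured in the GUI.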

This is what my LAN-IN firewall config looks like. Note the exemptions: 1) for NAT loopbacks, 2) for packets coming back on active connections to services available through loopback connections and 3) for the switch on the management segment, which still needs to talk to the controller, which has not yet been migrated from my general access segment. Rule 2004 is disabled for the same reason.

The content of the Web-Service-Loopbacks group. Note that I used hosts, not the entire segment, which would negate my isolation rules.

That’s it! While you’re doing the legwork of editing the JSON file, you might want to do things like disable SSH password authentication…

     "service": {
          "ssh": {
               "disable-password-authentication": "''"
          }
     }

…and configure RSA login.

     "system": {
          "login": {
               "user": {
                    "admin": {
                         "authentication": {
                              "public-keys": {
                                   "user@hostname": {
                                        "key": "aaaaabbbbbcccccddddd",
                                        "type": "ssh-rsa"
                                   }
                              }
                         }
                    }
               }
          }
     }
That’s a pretty good start to a robust and secure home network. Let me know if I missed anything.

The Internet of Things Needs Integrators, Not Gadgets

October 18, 2016

Crowd-funding and the sharp drop in development costs for internet-connected devices have given us both an onslaught of useless gadgets and the much-overused buzzword “Internet of Things”, or IoT. The concept has been pushed way past the borders of absurdity, as highlighted by the likes of @InternetofShit. I’m only half surprised: personal computing has plateaued, and with many of the “traditional” challenges posed by hardware, such as transistor and storage density, being more or less solved, what more is there to innovate in? To break the “sit down in front of a screen” paradigm of computing, the industry is shooting in all directions to find new ground. This spray-and-pray approach to innovation is surely finding new ways to do things, but in the frenzy of it, we are offered truly senseless gadgets, from cloud-connected candles to Bluetooth kegel exercisers.

All these gadgets are of course intended to be mass-marketed to the public at large, people who, in the aggregate, are almost completely technologically illiterate. I’d bet that most of the time, they are used with whatever app they ship with, and seldom integrated into other systems through platforms like IFTTT. Because they have to just work even for the least tech-savvy, several shortcuts are taken in the design of these gadgets: more often than not they are battery-powered, and they work in ways which handicap their usability for more advanced users. They are almost never meant to have a lifetime that exceeds a few years, at most. Almost all the time, they are dependent on some online service running in the cloud, which is liable to be hacked, taken down by an outage, or just plain shut down when the gadget-maker inevitably goes bankrupt or disappears into the fog like so many other tech companies.

All the while this is happening, people buying these gadgets with the intent of having technology make their lives easier are connecting them to $50 routers which are generally both unreliable and insecure. They pipe enormous amounts of data, some of it sensitive in nature, to servers hosted who knows where. Home automation is on everybody’s lips, but houses are still being built with a basic run of RG6 and a few phone lines as the only data cables within their walls. Technology as a whole is ubiquitous, but no one technology has gained the traction necessary to signal a significant change; one person might have a smart lock and another a few smart lights, but only the biggest nerds have integrated these things into systems which ACTUALLY make day-to-day living easier or more enjoyable. Absolutely none of these piecemeal IoT innovations has had the impact on the population at large of, say, the electric refrigerator or the good old personal computer. I believe the IoT will remain but a buzzword until the industry leaps over this hurdle and brings forth something that becomes as widespread in its adoption.

I feel that home automation is where the IoT really has the potential to shine. It’s completely abhorrent that the technology we use to interface with the functionalities of our living spaces has remained largely unchanged for the past century. With the current concern for making dwellings more energy-efficient, the time is ripe for more widespread adoption, but for IoT to gain traction, particularly in the home automation sphere, certain things need to change: some things need to stop, and others need to start happening more often.

STOP MAKING YOUR DEVICE RELIANT ON THE CLOUD. Off-loading compute and management to the cloud is a smart thing to do, for all sorts of reasons which I won’t bother listing. Most users don’t mind sending massive amounts of data to the cloud, and those that do are generally the type of people to homebrew their own automation solutions, so privacy concerns are not a huge deal. Where relying on the cloud gets annoying is when it, or your access to it, fails. The PetNet fiasco is a good example of this. It’s horribly bad design to rely on inputs from the internet for things to work, and even in 2016, nobody should ever be assumed to have a 100%-uptime internet connection. Ubiquiti has this figured out for their wireless access points. They employ a provisioning process that stores configuration locally on the devices and expects regular pingbacks from them, but works just fine when the controller is absent for whatever reason, even when the units get powered down. I’ve had access point setups run without a controller for MONTHS without affecting the product’s foremost functionality: providing wifi.

START GIVING THE OPTION OF NOT USING THE CLOUD AT ALL. In a perfect world, everything would be open source and I could run a copy of Nest’s backend on my own servers if I wanted to. In this imperfect world of ours, that is not an option. That doesn’t mean users shouldn’t be given the choice to opt out of the cloud entirely, and to use the array of sensors onboard the device in different ways. Perhaps this is too much of the Apple-championed “protected ecosystem” idea permeating the tech sector at large, but making cloud management a non-negotiable part of the product is surely preventing a lot of these IoT companies from moving units.

Sure, there’s an API for everything, but the fact that I have to call a server on the internet to interact with my own thermostat is totally nonsensical. That’s not to mention that API access usually can’t interact with the sensors directly. I’d love to be able to save myself a PIR install and use the presence sensor on my Nest products directly, but sadly, that’s not an option. How hard can it be to provide some way of querying devices, via the network or otherwise, to use the raw input they can provide?

It’s only a matter of time before we start seeing people’s expensive gadgets turn into paperweights because the company providing the backend goes tits up. By enabling direct-to-device interaction, we can avoid this.

STOP MAKING EVERYTHING WIRELESS. I’m talking for both power and data. While the evolution of ICs and battery tech has us changing batteries less and less often, it’s still an ENORMOUS pain in the ass to change batteries. There’s a reason why fire departments literally have to go door to door to remind people to change the batteries in their smoke alarms. Power redundancy: YES! Battery-only power that makes me buy obscure-sized coin cells every 18 months: HELL NO!

Wireless for data is obviously a go-to for IoT companies because it removes an important barrier to entry for consumers, who can just take stuff out of the box, plug it in and enjoy. This is fine if you use RGB lighting as a party trick to impress guests in your living room, but it’s very annoying if you have stuff throughout the house that needs to talk to Zigbee-to-wifi interface boxes. Soon, you find yourself with Zigbee relays plugged in everywhere, which is an eyesore. I can’t really comment on the security and interference implications of automating an entire house over wireless, but generally, it seems that reaching for whatever wireless protocol is handy is just a lazy workaround to using cables, which are more secure, resilient and effective at transmitting both power and data.

We live in a time where 4-pair network cable can transmit incredible amounts of data all the while powering a small TV, and the IEEE keeps developing on this technology. Start thinking about how you can leverage this.

START EMBRACING PERMANENT INSTALLS AND LONG LIFE CYCLES. People buy homes for years, decades even. If you intend to make a product which makes a home better, it should have a life cycle that makes sense on the scale of the thing it’s going into. Support your products long enough to make permanent installs a likely consideration, and find new ways to monetize a longer relationship with your customer base. By all means, innovate and disrupt and do all those things you startup nerds are up to, but don’t forget the people who bought your product because it solves one of their problems in favour of the fad-followers who react strongly to hyped-up release announcements. Think cross-compatibility (of brackets, connectors, wiring) and upgradability (of software, and hardware components where applicable). A product that complements a home shouldn’t be sold like a phone or a smartwatch.

This is wishful thinking, because honestly I don’t think anybody is in the business of making durable goods anymore. It’s a shame, especially considering the recurring revenue potential of IoT devices in the form of services that complement the device.

START SUGGESTING HALF-DECENT NETWORKS TO YOUR CUSTOMERS. Defence wonks who work on military drones believe they should be used as a tactical tool for strategic ends, and lament how politicians just use them as cool toys whenever convenient. I think IoT has the same problem: IoT devices should be considered part of a larger infrastructure, designed to fit into a well-built core network, not treated as standalone solutions.

This goes back to the “stop making everything wireless” point of course, but I’d really want to see the industry push for a holistic approach to making your home smart, starting with the basics: a decent router, correctly configured and laid-out access points, a hardwired network well integrated into the building, and a proper rackmount cabinet where everything terminates cleanly. Some will say I’m a gear slut, and to that I will plead guilty as charged, but hear me out. I’m not asking for a 42U cabinet in every home. I’m asking for a semi-standardized way of laying out the vital components of a smart home which makes upkeep easier.

Networking equipment ought to be offered as an option to owners building new homes; so much of our lives is spent interacting with internet-connected devices that it’s completely ridiculous to still have half-assed, retrofitted solutions providing this vital connectivity.

This will be a hard transition to make because of resistance from both consumers and IoT manufacturers, but in the long run, developing infrastructure within homes onto which companies can build will assuredly enable additional innovation in automation technologies. Tech people, form a consortium or whatever to push this agenda, and you’ll get to open up new avenues for your business in addition to creating new positions with cool names that you can put on your business cards.

START CHAMPIONING THE EXPERTISE OF INTEGRATORS. This ties in closely with all the previous points. No amount of ease of installation and 3rd-party integration capability will replace the well-thought-out integration of automation solutions into people’s dwellings. No matter how beautifully designed your product is, ideally it should be (in most cases) entirely hidden and require as little user intervention as possible. People want things that JUST WORK, and I’d argue that many times they are willing to pay people to get their stuff set up to just work.

Entrepreneurs are going to have to start offering those services, but there are things manufacturers can do to help a network of home automation integrators appear. Define installation norms, collaborate with your industry peers while doing it, and start working on certification and installation referral programs to legitimize the people who are willing to do the legwork of showing the masses what your products can do. Nest has this kind of program, and it’s an honest attempt at getting something like this going. In the end, integrators who know what’s out there and what can be done have the potential to generate sales and create loyal customers. Additionally, they provide priceless technical feedback and opportunities for large-scale beta testing of new products in real use cases. That’s stuff you usually don’t get when interacting directly with the customer.

The takeaway of this post is that companies bringing IoT products to market in the home automation sphere need to stop trying to push units through big-box retailers and start designing products that are meant to be long-term investments, just like air exchangers, heat pumps, furniture and large appliances. Give us fewer gadgets, and more solid solutions to actual problems. Give us less Bluetooth and more PoE Ethernet; things that integrate with a house, not a phone. Don’t just offer a cloud API; offer more complete access to the devices themselves, for users who want to use your products as products, and not as services. Maybe then we can start seeing truly smart homes.

The “28 Pages” and What They Mean for American-Saudi Relations

August 16, 2016

A few weeks ago, a mysterious portion of the Joint Inquiry into Intelligence Community Actions Before and After the Terrorist Attacks of September 11, 2001 was declassified after a two-year review process. Commonly known as the “28 pages” (there are 29 of them, but the name stuck), the document describes the inquiry’s findings concerning possible links between the Kingdom of Saudi Arabia and certain individuals involved in the September 11 attacks.

Read the full article at Observatory Media.


August 1, 2016 § Leave a comment

Late last week, a mysterious portion of the Joint Inquiry into Intelligence Community Activities Before and After the Terrorist Attacks of September 11, 2001, which had been classified since the report’s release, was declassified following a two-year-long declassification review. Known colloquially as the “28 pages” (the count is wrong but the name stuck), the document describes what the inquiry found regarding possible links between officials from the Kingdom of Saudi Arabia and individuals known to be involved in 9/11.

Read the entire article at Observatory Media.

Bypassing Bell Fibe FTTH Hub with Ubiquiti EdgeMAX Equipment

July 31, 2016 § Leave a comment

Whenever I have to interact with big telcos, I inevitably come to ask myself why they are still in business. It’s a wonder that companies that are so big and so dysfunctional on so many levels still have any customers at all. I’ve recently had to do an ISP switchover from dual Cogeco 100mbps over copper to a single Bell Fibe 250mbps line, and my experience was less than stellar. Apart from getting the usual “oh, we’re sorry, your line isn’t quite active yet on our end” not once, but twice after the install tech’s visit, their business technical support was entirely useless.

Sagemcom FAST4350

The dreaded Sagemcom FAST4350 aka Bell Hub 1000. Bell can connect to this to diagnose stuff apparently; you know what they say about back doors… Good riddance.

If you’re a Fibe customer, you probably know that Bell insists you use their Hub 1000 or 2000 (their name for what is actually a Sagemcom FAST4350 router) in your networking setup, regardless of the fact that as a business customer, you probably have something more suitable for the job. If you try to bypass it, don’t hope for any kind of technical support: the mindless automatons they haven’t yet managed to outsource or replace with machines at level one support will absolutely refuse to escalate your call or provide any sort of useful information. The Sagemcom has a bridge mode that can be triggered with button presses; if it’s bridging, they won’t support it either. I’ve heard from many sources that Bell has remote access to the router, and I would be tempted to believe it. Since neither I nor my customer was about to let a $30 piece of shit router with a back door be the weak link in our connectivity, we found a way to get the connection working with the EdgeRouter we had on site, completely bypassing the Sagemcom box.


The Alcatel-Lucent ONT. This is obviously not optional.

The VLAN 35 / VLAN 36 trick is well known, albeit completely undocumented by Bell, and seems to work network-wide. Basically, traffic to the Bell ONT is divided into two VLANs: 35 for internet traffic and 36 for the Fibe TV streams. Our use case only had an internet connection, so getting it to work was as simple as creating a VLAN interface on the ethernet interface connected to the Alcatel-Lucent ONT, then creating a PPPoE interface within that VLAN. Once that’s done, enter the username and password for the PPP connection as provided by Bell, set the firewall rules for your connection, and you’re good to go. The commands should look like this:

set interfaces ethernet ethX vif 35
     #creates VLAN 35 on interface ethX
set interfaces ethernet ethX vif 35 pppoe 0
     #creates PPPoE connection zero within the newly created VLAN
set interfaces ethernet ethX vif 35 pppoe 0 user-id thisisyourusername
set interfaces ethernet ethX vif 35 pppoe 0 password thisisyourpassword
     #set the PPP credentials provided by Bell
set interfaces ethernet ethX vif 35 pppoe 0 firewall in name WAN_IN
     #set inbound firewall rules
set interfaces ethernet ethX vif 35 pppoe 0 firewall local name WAN_LOCAL
     #set local firewall rules
set interfaces ethernet ethX vif 35 pppoe 0 default-route auto
     #get your routes from the PPPoE connection

By default, your EdgeRouter uses all the right options for connecting over PPPoE, except for one. The MTU was explicitly set to 1492 without my intervention, and name-server to auto as well. This will let you connect and get an IP, but only certain parts of the internet will be reachable. I thought this had something to do with routes, but a static default route through the interface and the PPPoE-provided routes both yielded the same results. Connections to YouTube and other Google sites worked great, but half the internet didn’t, with packets stopping 6 or 7 gateways down. It turns out MSS clamping was to blame, so you’ll need to set that in your firewall.
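For reference, if those two defaults ever need to be set by hand, the corresponding configuration lines should look something like the following. This is a sketch based on the EdgeOS option names as I know them, so double-check against tab completion on your own unit; ethX is the same placeholder as above.

```shell
set interfaces ethernet ethX vif 35 pppoe 0 mtu 1492
     #1500 minus the 8 bytes of PPPoE/PPP overhead
set interfaces ethernet ethX vif 35 pppoe 0 name-server auto
     #use the DNS servers advertised by the PPP peer
```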

set firewall options mss-clamp mss 1412

Without this option, your packets can end up too large and get dropped by certain hops that enforce the maximum MTU strictly. What’s funny is that MSS clamping is assumed if you use the wizard to define your WAN as PPPoE, but it is not one of the assumed options when setting up PPPoE from scratch. Since we were migrating from a load-balanced setup where DHCP was provided by the modems, it was not present in our config.
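For the curious: EdgeOS is Linux underneath, and the mss-clamp option most likely translates into a standard iptables TCPMSS rule in the mangle table. The equivalent hand-written rule on a plain Linux router would look something like this, here clamping to the discovered path MTU rather than a fixed 1412:

```shell
# Rewrite the MSS option on forwarded TCP SYN packets so that
# negotiated segments fit through the smaller PPPoE MTU.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
```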

Keep in mind that this will only work for setups without Fibe TV. For setups with it, you can follow the excellent instructions from here, with the added bonus that you can probably use the software port bridging functionality of your EdgeRouter to eliminate the need for a switch between your ONT and the Bell Hub. This is entirely untested though, so have fun at your own risk.
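If you do want to experiment with that idea, my guess is the bridge configuration would look something like this; br0, ethY and ethZ are placeholders for the bridge and the two ports facing the ONT and the Bell Hub, and like the rest of the idea, this is untested:

```shell
set interfaces bridge br0
     #create a software bridge
set interfaces ethernet ethY bridge-group bridge br0
     #add the ONT-facing port to the bridge
set interfaces ethernet ethZ bridge-group bridge br0
     #add the Hub-facing port to the bridge
```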

My primary frustration with Bell in this case, apart from having to go on-site three times to get a relatively simple job done, is that I’m pretty certain a more senior level 2 technician would have pointed to the MSS issue in a matter of minutes; it’s just the kind of thing you know when you work with this stuff all day. I know for a fact that there are technically competent folks who work at or for Bell, because I’ve had great experiences in the past. I’ve assisted in some pretty hard-core roll-outs of Bell services to enterprise customers, and the techs there were present and helpful. Where they were not, a quick chat with the sales guy could fix things very rapidly. The problem is that if you don’t have a sales guy, even as a business customer, you have to navigate the same byzantine call-in support system as the rest of the plebs, wait a long time and hope for the best. If you’re lucky, you’ll get a zealous tech who’s willing to break SOP to make you happy; if you’re unlucky, you’ll get another drone who just reads the lines he’s supposed to.

Headless Steam In-Home Streaming, Pt 1: My Experience

July 28, 2016 § Leave a comment

For almost exactly three years now, I’ve been using a mid-2013 13.3″ Macbook Air as my primary machine. As I explained in a review which has now disappeared with the demise of Epinions, I didn’t expect the transition from an expensive gaming rig to a super-slim, barebones laptop to go as smoothly as it did. The idea behind the move was partly to make myself incapable of gaming during university, and partly to have a single machine I would use for all my computing needs. I wanted a “single pane of glass”, as it were.

Sync solutions that aren’t cloud-based generally suck (although they’re getting better), and even when they don’t, there are obvious drawbacks: needing an internet connection for syncing to happen, or the device having to be powered up. Controlling several machines in tandem also sucks; Synergy is a somewhat acceptable solution, but it’s one more piece of software to run, update, configure, and potentially troubleshoot. I wrote something on this in 2008 if you want to cringe. Sometimes, you just want to fire up a machine, open a file you’ve been working on from your desktop and put in some work, without thinking about it. That’s where having one compact computer with a 10+ hour battery life is very appealing.

It’s obvious that the Macbook Air has limitations, and getting past those limitations invariably requires more hardware. I was completely kidding myself when I thought I would save money by going to a less powerful machine, and I’ve got several U of rack space to show for it. The real challenge isn’t really saving money; it’s integrating the hardware in a way that sticks to this idea of a single terminal to rule them all. Steam’s addition of In-Home Streaming functionality is great for providing gaming in this way. Here’s my experience with getting the thing to run correctly.

My setup

As with any gaming setup, the sky is the limit as to what you build to use In-Home Streaming. My setup is relatively humble, a far cry from the flashy (and noisy, and expensive) rigs I was building before. Here are the specs.

Case: Cooler Master Elite 110
Power Supply: Corsair Pro Series AX650
Motherboard: Asrock H81-M ITX
CPU: Intel Core i7-4790K
Memory: Crucial Ballistix Sport 16GB
GPU: Zotac GTX 960 2GB

This setup lives in a closet along with my other networking stuff, with only a network cable and a power cable going to it.

Common problems and fixes

The Steam KB covers setup and the really common problems well, so I’m not going to bother rehashing how to get the thing to work. What I will cover are the problems I have personally run into, most of them related to running a machine without any input or output peripherals. Depending on how games manage mouse and keyboard input as well as video output, they may not like having no physical inputs plugged in, or not being passed a screen resolution by the OS because the screen is absent.

The most obvious need is for an emulated display. Being a long-time contributor to the Folding@home project, I was well aware that nothing that uses a GPU will work without either a monitor or a dummy plug. The old-school way of doing this is to take one of the DVI-to-VGA adapters that usually come with graphics cards and add a few resistors to cook up a quick and dirty dummy plug; the folding community has used those for years, and they are known to just work. While the internet specifies 68 to 75 ohm resistors, I’ve done mine with 125 ohm resistors to the same effect. The new-school way is to use an HDMI dummy plug, which has the additional advantage of emulating an audio output, eliminating the need for more dongles and hacks.

Audio also needs a simulated output, because most audio chipsets these days deactivate if no headphones or speakers are plugged into them. While I’m sure we all have spare sets of broken headphones around, neat freaks will likely want a cleaner solution. Most motherboard 3.5mm audio jacks have physical switches inside, so all you need to do is stuff in either a dummy plug or an actual, unwired 3.5mm plug and onboard sound will work. I’ve personally run into games which crashed with an explicitly audio-related error message. Some games might work without this, but better safe than sorry.

Finally, mouse and keyboard input is also needed locally for remote inputs to work properly. I played through half of AC4: Black Flag without the mouse’s scroll-wheel because using it would cause the game to scroll infinitely, something several posters on the Steam forums report happening in a number of games. Here again, having a mouse plugged in is an easy solution, but not quite as clean as one would want. My solution was to use a Logitech Unifying receiver without any wireless mouse or keyboard connected, as it registers as both mouse and keyboard HID devices. It’s also very low-profile, which is perfect for this application. I’ve been looking for a way of emulating HID devices via software, but I have not yet found anything worth mentioning. From my time in big box stores, I knew for a fact that my local Geek Squad would have, for no discernible reason, a buttload of orphaned receivers; that’s where I got mine. Check with your local store if you don’t have a receiver handy. Otherwise, the receivers sell for about $10 on Amazon.

Once all the inputs and outputs are taken care of, you should have minimized the chances of games doing weird things on start-up. Then comes optimizing the encoding for maximum performance and picture quality. From the get-go, Mac users streaming from a machine with an Nvidia GPU are SOL when it comes to hardware-accelerated encoding, because of a known incompatibility. The performance is great, but your game will suffer from occasional colour spots and artefacts, especially in dark scenes.

Examples of artefacts and other problems caused by Nvidia encoding when Apple hardware receives it.

For my setup, I found that simply deactivating hardware encoding altogether makes for the most beautiful and smoothest gameplay. The Intel iGPU might seem like a good idea, since it offloads to hardware thanks to Intel Quick Sync Video, but I found it lacking: the video output was quite laggy despite framerates staying high. I’m not sure this is what Quick Sync Video was meant for; I’d stay away from it. If you have a modern CPU, chances are your bottleneck isn’t the CPU anyhow, so software encoding will probably do little to your experience unless you’re running a very CPU-intensive title. Steam forum posts seem to mirror my experience.

Caveats and important considerations

Despite the fact that this system works great for me, there are some things to be aware of. It works for me because I’m a filthy casual, albeit a very tech-enthusiast filthy casual. I only build a computer to play games when I’ve amassed too many from Steam Summer Sales not to play them. I’ve been gaming on wireless peripherals for a few years now, and I’m not going back. I don’t care too much about latencies, refresh rates and the like. If you’re a hardcore gamer, you probably hate the idea of streaming a game, because MUH LATENCY. If you’re a filthy casual of the non-IT-aficionado variety, I’m surprised you’re still reading this. This brings me to what is probably the biggest caveat of In-Home Streaming: I feel it doesn’t fit many use cases yet, for a variety of reasons.

Firstly, you need fast networking. I’m talking wired, ideally gigabit, if you want your games to run smoothly. Since most non-gamer normies’ idea of fast networking is likely a router with “Wireless AC” written on it, this probably rules out most folks. The convenience of wifi overtook good old cables long ago, and asking users to drop that convenience and run cables is a hard sell.
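If you’re not sure what your network can actually sustain between the two machines, a quick throughput test settles it. iperf3 isn’t part of the streaming setup at all, just a common tool for exactly this kind of check:

```shell
# On the rendering machine, start a test server:
iperf3 -s

# On the receiving machine, run a test against it
# (192.168.1.10 is a placeholder for the rendering machine's address):
iperf3 -c 192.168.1.10
```

If the reported bandwidth sits comfortably in the hundreds of megabits, you’re fine; typical wifi numbers will usually make the case for running a cable on their own.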

Secondly, it’s not like gaming in front of an actual display in terms of image quality. The video stream coming from the rendering side of the setup is compressed, and those of you with keen eyes will notice this in certain settings, especially if you’re running on a non-gigabit connection.

Thirdly, it’s still not seamless. Most games will work fine, even ones that require a launcher or aren’t from Steam, but some titles inexplicably require user intervention via RDP to get things working. For example, after launching Watch Dogs, other Steam-enabled Ubisoft titles will sometimes no longer launch, requiring a restart of Uplay. Sticking to the same title is fine, but playing different ones will inevitably cause hiccups at some point, and those will require user intervention. It’s never anything major, but it’s still a pain in the butt.

Finally, there is some of the dreaded additional latency, on both input and display. You won’t be winning a CS:GO world championship on a stream from a remote computer. It’s only really noticeable in faster shooters, but it’s there, by the very nature of how streaming works. As video codecs get better and 10GBase-T becomes ubiquitous, this concern will fade away, but for now we have to deal with it.


To sum it all up, while I don’t think the technology is quite mainstream-ready, Steam’s In-Home Streaming is a step in what I see as the way of the future: decoupling the computing/gaming experience from the hardware required to provide it. Let’s face it, if you’re building your own computer, you can probably handle the additional legwork of configuring and maintaining a streaming box. If on top of that you don’t mind making minor concessions on performance for the sake of practicality, I’d strongly recommend you look into it. For me, having a totally silent work environment alone is well worth it.

In part 2, I’ll go over what I’d like my next streaming box to look like, and what steps I intend to take to integrate In-Home Streaming functionality into my existing server setup.