Storj, The “Sharing” Economy and Why It Matters

March 18, 2017

The sharing economy is a buzzword which has gained enormous traction despite remaining, in my opinion, widely misunderstood. The luddite politicians who have been vocal against it on the grounds that it’s a dishonest loophole for escaping regulatory frameworks have it obviously wrong, but I feel that even enthusiastic participants and users probably haven’t considered all its implications. It’s not just about creating internet marketplaces for everything. The true genius of the sharing economy is twofold.

First, it creates producers out of what would previously have been considered ordinary consumers. This in itself is an incredible thing which holds the promise of economic emancipation for a segment of the population who were previously unable to jump over the barrier to entry and start making money out of their property. Ordinary folks have traditionally had to buy productive goods and watch them depreciate before their eyes while making limited use of them. The car is probably the best example of this: it’s an expensive piece of machinery that people pay to own, pay to use, pay to upkeep and often pay to store. It’s an expense, not an investment, because it simply can’t generate enough value in the hands of a single individual. Uber and Turo have changed this, and through a website have transformed thousands of cars from consumer goods into capital goods. In the Marxist sense, people now literally own means of production.

Secondly, and this is closely related to the first point, the sharing economy enables optimal use of resources, which produces economic efficiency. Because economic efficiency means extracting every last bit of value out of every resource, be it goods like cars or consumables like energy, it should be championed. It’s good for the economy, good for the environment, and by lowering the cost of certain goods and services, it’s also a social good. To carry on with the transportation example, it appears self-evident that there are more than enough road-going vehicles currently in North America to satisfy the entire aggregate demand for transportation, if only we had a means of extracting all this dormant value. The sharing economy is this means.

Information technology has enabled the sharing economy, but it could also greatly profit from it. The turn to cloud-based server infrastructure is great, because it allows for cost-efficient and resilient systems that are easy to scale out, but the true cloud hasn’t yet been achieved. The “cloud” still lives in servers and cabinets centralized in server rooms and datacenters, and no amount of local redundancy will rid this model of its vulnerability to downtime. In essence, it’s really just somebody else’s computer(s). Like it or not, AWS still suffers from outages. On the efficiency side of things, having millions of customers on the same ultra-robust infrastructure and enabling nearly automated scale-out makes sense, but in the aggregate, people who want or need their own hardware will continue to build for peak load. This means that a lot of compute, storage and network capacity is wasted while servers wait idly for sufficient solicitation. And that’s just servers; imagine the compute lost in ordinary desktop computers across the world that are always powered on, only to be used to browse Facebook a couple of hours a week.

The next step in achieving a connected world is decentralizing compute and storage, making the internet truly everything-proof short of a nuclear apocalypse. Here again, as with the transportation problem, it can easily be imagined that this utopia could be achieved if the available compute and storage resources lying dormant could be fully utilized. Turns out, that’s already in the works, at least for storage: enter Storj.

The Storj project uses blockchain technology to build a network of storage nodes which store data in a way that is fast, resilient and efficient. Users with spare storage and bandwidth can assign a certain quantity of hard drive space to be shared on the network, which is advertised via DHT, the same technology that enables trackerless bittorrent downloads, and loaned out through smart contracts to people or businesses who need their files hosted. The protocol manages redundancy, and has provisions for determining which blocks (or shards) go where based on parameters like the availability and speed of the hosting node. The network has been shown to be pretty fast, to the point where you can stream media from it. Based on how much data you make available on the network, how many discrete contracts your node services and how well it does so, you get paid in a special crypto token that can be redeemed for other crypto and fiat. At scale, this token can also be used to pay for services, giving it real-world value which makes it an interesting digital asset to hold.

The first use case that comes to mind is a Dropbox-like service with lower cost per gigabyte and greater fault-tolerance (and possibly greater speed), but this is not what the Storj team is pursuing, choosing instead to focus on being a backend for other app builders and service providers. This is a smart choice, because the technology opens up incredible opportunities in networks that already exist but rely on a more traditional client-server architecture. For one, anything that relies on a content distribution network (CDN) could benefit from this technology, by reducing the need for robust server infrastructure to serve content directly, giving redundant access to storage, and possibly reducing the cost of actually getting data from point A to point B. Consider an IP-based TV service provider of the traditional kind, which still puts receivers in its customers’ homes. With a private Storj network, it could leverage the thousands of unfilled hard drives in its receivers to store on-demand content, radically decreasing its need for servers and the stress on the main trunks of its infrastructure. If two neighbourhoods’ worth of receiver hard drives can store all of its on-demand content, why overbuild the link from these two neighbourhoods to its datacenters hundreds of kilometers away? This way of doing things essentially spreads out the load on storage and network over a wider area, making more efficient use of the resources already in it.

There have been other attempts at distributed storage before. I’ve been involved in using Resilio, formerly known as BitTorrent Sync, in a fairly large production environment to transfer content, and it worked pretty well, but it was intended to sync between your own nodes, not necessarily the network at large, making it less interesting in terms of its ability to spread the load. Syncthing offers more or less the same feature-set as Resilio, is open-source and cross-platform, but suffers the same drawback of being aimed at providing file-sharing between nodes you operate yourself. None of these solutions do anything to utilize dormant resources, let alone remunerate you for contributing to the network. To put it otherwise, existing solutions remain within the traditional “cloud” paradigm of specialized machines doing particular jobs, whereas Storj goes way beyond that into the distributed cloud.

If you have spare storage, and if you live in the developed world chances are you do, I strongly suggest that you encourage the project: download their app and start sharing. The project currently has several petabytes (a petabyte is a million gigabytes) of available storage, making it probably the biggest experiment in distributed storage ever to have existed. With important commercial agreements having recently kicked in, the product being out of beta, and paid access billing systems now fully functional, the network is surely going to see a spike of activity. You’ll be making money off storage you otherwise wouldn’t use, but you’ll also be contributing to technology that will surely power the internet of tomorrow. The sharing page has everything you need to become a data farmer.

Projects like these show what the sharing economy is really about. It’s not about sharing in the altruistic sense; it’s about using technology to break down barriers to entry and make markets work better, decentralizing all the things, and empowering consumers by promoting them to the status of producer.

In Defense of Political Radicals

January 11, 2017

Sometimes, life imitates shitposting. Under the guise of conducting research, I’ve come to join and like an absolutely inordinate number of politically-inclined shitposting groups and pages. It’s bad, to the point where normie posts about meals, trips and life achievements have been completely drowned out in political compass memes and ancap smileys. Right around the time radical centrist memes began making an appearance in my favourite meme group, I became aware of a piece by Hunter Maats which perfectly embodies everything that’s wrong with the idea that inspired the meme.

This is what Hunter Maats looks like. See the original tweet in context.

In his extremely verbose article, Maats performs an intricate kata of mixed mental arts, chopping down conveniently constructed straw men built up in the shape of Tom Woods, intended as a stand-in for the entire ideology of anarcho-capitalism. Between ad-homs (intended as bait?) and masturbating about how widely he has read, the author does a pretty shitty job of explaining to us why and how anarcho-capitalism sucks, relying mostly on choice arguments such as MUH HUMAN NATURE. I’m not going to white-knight for Tom, simply because he doesn’t need my help. I’m not going to refute the piece either, because the concerns Hunter has with anarcho-capitalism have been addressed before. Maybe he hasn’t read widely enough. What I am going to do is point out how the underpinnings of Maats’ “arguments” are essentially the same as those upon which the state builds its own perverse logic, and in doing so, hopefully make a sound defense of political radicals, a group in which I include libertarians.

What aggravates libertarians more than anything else about the state is its hubris: how a small group of elected officials, upon manning the controls of the apparatus of the state, suddenly come to think of themselves as gods who, with sufficient time and budgetary appropriations, can solve everything. Functionaries of the state and their elected political overlords have different interests, but these coincide when it comes to making the state look like an all-knowing entity with the power to fix everything. We can make the Middle East democratic. We can save people from their self-destructive habits through nudging and subsidies. We can engineer our way into permanent economic growth. We can provide everything for everyone forever.

This hubris of the state is also that of the technocrat, and apparently, Hunter Maats. While reading his piece, I got the impression that Maats’ point was that if only everybody read as many New York Times best-sellers as he did, we could finally move towards an improved society; if only everybody accepted consensus opinions on subjects like “human nature”, we’d have a solid foundation upon which to build a better society. In doing this, Hunter commits what should amount to a cardinal sin for anybody who believes in science: he behaves as if he had found pure, eternal, objective truth between the covers of paperbacks. His pretension to be able to build politics upon a solid foundation is nonsensical: firstly because it is epistemically unsound, and secondly because understanding politics as science is absurd*.

Modernity was built on doubt; such a well-read person as Maats ought to know this. Descartes’ methodological doubt constitutes the basis for modern science, and thinkers like Karl Popper who have built upon it have shed light upon one of the foremost criteria of truth: falsifiability. In the face of science, the soundest of knowledge is to be questioned; all fact is provisional by definition. Men and women of science ought to be humbled by this sword of Damocles inherent to the process of discovering and rediscovering truth. Knowledge is a shaky edifice by virtue of the fact that its building-blocks are subject to crumble at any time. When he makes cocksure pronouncements on the existence of an essence of man or processes of evolution on the scale of the universe, what Maats is doing isn’t science; he’s simultaneously praying at the altar of science and desecrating it with his arrogance.

Ironically, the Austrian school of economics, whose works Maats refers to as mental masturbation, does a much better job of being humble in its view of the science of economics. To be sure, the Austrian methodology can be critiqued, but its insistence that economic preferences are subjective and largely impenetrable is a sign of commitment to the idea that some things simply can’t be perfectly revealed through science.

So, one could say that Maats’ refutation of anarcho-capitalism in favour of governance based on science is significantly weakened by the fact that he fails to acknowledge that every bit of knowledge he builds his case on could be proven completely false. What completely demolishes his case against Woods / libertarians / ancaps is his complete lack of understanding of what these people and groups are fundamentally about.

The libertarian movement is a political one. This implies that it interacts with other movements, actors and institutions that are contemporary to us; it attempts to influence the status quo in a manner coherent with its founding principles. Hunter portrays the movement as striving for the establishment, ex nihilo, of a stateless society; this is of course ridiculous, because politics is by its very nature a process. How did we get here, politically? According to historians and scholars of nationalism like Benedict Anderson, the state was formed around the state-sponsorship of an official culture through things like language and education, progressively asserting itself as an incarnation of the collective will which bore much more legitimacy than that of divine-right monarchs. Democracy is a relatively recent institution which best channels this ideal of legitimacy, and which has proven to be robust, if imperfect. There is no pivotal moment where our current institutions magically appeared. There is no teleological “march of progress” leading up to democracy and the modern state. What we have before us is the result of human action, the slow process of humans shaping their environments through interacting with one another.

We libertarians, the political ones at least, wish to participate in this process for the betterment of our society. We are well aware of our institutional surroundings, as demonstrated by our eagerness to criticize the state, its innate immorality and its spectacular shortcomings. I know of nobody who wishes to achieve the libertarian ideals while ridding ourselves of the good things we have invented collectively (the idea of courts and arbitration of disputes, social institutions, and even *GASP* roads!), but this does not mean we do not carry a utopian vision of what we would want society to look like. This utopia is a guiding light, in the same way the sun, moon and stars can be used for navigation despite being out of reach. If politics is constantly in movement and evolution, this ideal is the direction of our vector, not a mere point.

In a sense, Maats is attempting to discredit an inherently political concept, the ideology of libertarianism, in strictly apolitical terms. In his article and follow-up tweet to me, he makes it sound like being political is a bad thing, and that science alone can yield a better future. Is he forgetting that the Kants and Rousseaus he likes to namedrop were also guided by an ideal, and eminently political? Is he denying that their ideas had an influence on the foundation of our societies, despite their respective utopias never having been achieved? In citing Diamond, Pinker, et al. as “evidence” (his words) of some sort of human nature and transcendent process towards progress, he discards the role of human action in the shaping of our world. The thinkers, the people their ideas sway, the practitioners of politics and the masses they mobilize, the groups that coalesce around this or that aim or objective: they are the real force behind the creation of our societal order. The very idea of the contemporary nation-state that is so dear to Hunter was born of an assembly of myths, from the social contract to the “collective will”. Who is to say that ideas can’t change the current order of things?

In his attempt to evacuate politics in order to assume the stance of the “realist” (again, his words), Maats dons the robes of the soulless technocrat, eager to engineer a better future but all too quick to forget the human beings who will inhabit it. Like the state, he adopts a quasi-religious belief in the “just a bit more effort” mentality and carefully plans a map to the exact location of what turns out to be a mirage of a brighter tomorrow. In the end, the only place he’s bound to end up after much adjustment, readjustment and sensible choices is further down the road to serfdom. The real enemies of human progress aren’t the ideologues on the left or the right, who dare imagine a different world. The real threat, if one indeed exists, is radical centrists like Maats, who look to smother the act of politics in what they perceive to be truth.

Radicals, keep doing what you do, your ideas and actions are the engine of history. Hunter, once you’re done beating off to best-sellers, you can come out and play too.

 


*This claim might sound weird coming from a political scientist. In the French language, there exists a distinction between la politique and le politique. The first refers to the conduct of politics, the interaction between political actors (campaigning, coalition forming, media ops, etc.), while the second refers to the abstract notion of the affairs of the state and governance. Here, I am referring to la politique. To be sure, my discipline, insofar as it concerns the analysis of public policy, voter behaviour and scholarly analysis of political situations, is absolutely a science. When discussing ideology, however, this is not what we’re talking about.

Home Network Segmentation, NAT Loopback to VLAN on Ubiquiti Unifi Gear

November 4, 2016

I’ve been putting off segmenting my network for a while now, but the recent IoT-botnet-powered DDoS bumped the task up my list of priorities, and I finally got around to doing it. Generally, if your network is anything other than non-critical clients accessing the internet, that is to say if you have any sort of IoT devices or if you host any internet-facing services at home, it’s probably a smart thing to split up your network into segments. Doing so allows finer-grained control over which machines can talk to each other, thus enhancing security. A segmented network is usually also easier to survey and audit, because irregularities like “why the hell is there an Acer laptop in my server segment?” stand out more, and with the appropriate monitoring solutions you can more easily generate usage stats by just running queries for an entire segment.

Let’s use the IoT botnet situation as a practical example. If your Chinese-made, off-brand IP camera system was rooted, we now know that it has the potential to take part in taking down a third of the internet. But given that a third party has complete control over the device, there’s no telling what it could do, including spreading to other machines on your network, stealing some files off an improperly secured network share, etc. Not being able to shitpost to Facebook is one thing, but messing around with my self-hosted stuff and data is where I draw the line.

There exist several models for how to segment networks, but this is one of those things that is really an exercise in common sense. A military buddy once evoked a three-tier system that’s the standard for military IP networks; that’s probably great for their use case, but your home isn’t a forward operating base in the Horn of Africa, and the military probably doesn’t have the likes of Nests and Augusts in their networks. Thanks to the magic of VLANs, you aren’t bound to a specific number of segments, and you’re probably better off doing more than not enough, within reason. Here’s what my setup looks like.

Diagram of my segmented home network.

It’s a pretty simple setup, but since the diagram doesn’t tell the whole story, here’s some more explanation.

General access: This is the untagged network that anybody who connects via wifi or a wired connection gets assigned to without further configuration. This is the least trusted segment: it has limited access to the others, but it’s also protected from them.

Web-facing services: Self-explanatory. This is completely isolated from all the other segments, both in and out, because clients normally just connect through the WAN. There is an exception for local clients trying to connect from the General access and Network management segments, which get NAT’d back in, but those exceptions are made for application-specific ports only and everything else can be firewalled out. For Unifi gear, everything needs to talk back to the controller for your network to work, and if that controller is local, you’ll need to punch a few holes in your firewall. More on this later.

Network management: This is for the administration interfaces of switches, routers and access points, in addition to IPMI and other remote KVM options. In a Unifi environment, nobody should be talking to the APs or the switch directly, so this segment can normally be isolated. Per-host exceptions can be made if you want to access those devices, most likely an IPMI console, from your main machine. Seeing that IPMI practically gives the next best thing to physical access to your systems and has been proven to be a wide-open opportunity for attack, it stands to reason that this segment should be sealed up as strongly as possible through your firewall policy. Note that for now, Unifi APs do not support being managed on a tagged VLAN, although the feature is supposed to be in the works. My own Unifi switch did not have this problem with the latest firmware.

Home Automation: Since home automation products usually only talk to the internet anyways, this is another locked-down segment that can’t speak to anybody. Where necessary, an appliance that needs access to more “commercial-grade” products that actually allow interaction from the LAN can be assigned an address within the VLAN. While it’s not deployed right now, a virtualized machine running Home Assistant would be an interesting addition.


Building a segmented network with a Unifi gateway as your router is a bit different from what can be done on other platforms, since the incomplete GUI controls don’t offer all the options necessary for fine-tuning your setup. The major annoyance is that NAT loopback (aka hairpin NAT or NAT reflection) doesn’t seem to be properly implemented. Port-forwarding via the USG configuration menu works when accessing from the internet, but the loopback config seems to assume that you will only be forwarding ports to a single subnet, and hence only need loopback to and from this subnet. We’ll need to fix this.

Unifi USG port forward dialog. Simple enough.

I found the fix for this while reading a dated how-to for EdgeMAX products. Once you’ve configured your port forwards, you’ll have to manually set up NAT masquerading to the hosts that will be receiving the forwards for loopback to function correctly. There is no means of setting these things up in the GUI, so you’ll have to use the old config.gateway.json trick to manually input some NAT rules.

I strongly recommend learning how to program EdgeMAX / Unifi gear, because it’s a great help in understanding how to modify the config file, but in case you’re in a hurry, here’s a sample of the JSON you’ll need to add.

 
{
     "service": {
          "nat": {
               "rule": {
                    "7001": {
                         "description": "Hairpin NAT Transmission",
                         "destination": {
                               "address": "10.1.2.3",
                               "port": "9091"
                         },
                         "log": "disable",
                         "outbound-interface": "eth1.121",
                         "protocol": "tcp",
                         "source": {
                               "address": "10.1.0.0/24"
                         },
                        "type": "masquerade"
                    }
               }
          }
     }
}

Replace the description, destination address and port, source address range, and outbound interface (mind the VLAN!) and you are good to go. You will need one rule for every host. This JSON was built assuming you have nothing else in your on-controller config file; respect the hierarchy if you have other things going on in there, and ALWAYS validate your JSON.
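
Any JSON linter will do for validation; here’s a minimal example, assuming Python is installed wherever you edit the file:

python -m json.tool config.gateway.json
     #prints the parsed JSON on success, or the offending line and column on a syntax error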

You’ll need a rule for every port forward you want accessible from your LAN. It might be possible to define those in one shot by defining several destination addresses and ports, but I haven’t tested this.
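
For what it’s worth, here’s a sketch of what two such rules could look like side by side, each forward getting its own rule number; the second host (10.1.2.4) and port (443) are made-up placeholders for illustration:

{
     "service": {
          "nat": {
               "rule": {
                    "7001": {
                         "description": "Hairpin NAT Transmission",
                         "destination": {
                              "address": "10.1.2.3",
                              "port": "9091"
                         },
                         "log": "disable",
                         "outbound-interface": "eth1.121",
                         "protocol": "tcp",
                         "source": {
                              "address": "10.1.0.0/24"
                         },
                         "type": "masquerade"
                    },
                    "7002": {
                         "description": "Hairpin NAT web server",
                         "destination": {
                              "address": "10.1.2.4",
                              "port": "443"
                         },
                         "log": "disable",
                         "outbound-interface": "eth1.121",
                         "protocol": "tcp",
                         "source": {
                              "address": "10.1.0.0/24"
                         },
                         "type": "masquerade"
                    }
               }
          }
     }
}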

In the event that the destination port on your forward is not identical to the incoming port, you’ll want to configure the inside port in this masquerade for it to work. Be careful: loopback traffic is not exempt from passing through your LAN-IN firewall rules, so you’ll need to configure exemptions to let it through.
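
If you’d rather keep those exemptions in config.gateway.json too, instead of clicking them together in the GUI, a minimal sketch might look like this; I’m assuming a ruleset named LAN_IN and the Web-Service-Loopbacks address group shown below, so adjust to whatever your controller actually generates:

{
     "firewall": {
          "name": {
               "LAN_IN": {
                    "rule": {
                         "2001": {
                              "action": "accept",
                              "description": "Allow NAT loopback to web services",
                              "destination": {
                                   "group": {
                                        "address-group": "Web-Service-Loopbacks"
                                   }
                              }
                         }
                    }
               }
          }
     }
}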

This is what my LAN-IN firewall config looks like. Note the exemptions: 1) for NAT loopbacks, 2) for packets coming back on active connections to services available through loopback connections and 3) for the switch on the management segment, which still needs to talk to the controller, which has not yet been migrated from my general access segment. Rule 2004 is disabled for the same reason.

The content of the Web-Service-Loopbacks group. Note that I used hosts, not the entire segment, which would negate my isolation rules.

That’s it! While you’re doing the legwork of editing the JSON file, you might want to do things like disable SSH password authentication…

 
{
     "service": {
          "ssh": {
               "disable-password-authentication": "''"
          }
     }
}

…and configure RSA login.

 
{
     "system": {
          "login": {
               "user": {
                    "admin": {
                         "authentication": {
                               "public-keys": {
                                    "user@hostname": {
                                         "key": "aaaaabbbbbcccccddddd",
                                         "type": "ssh-rsa"
                                    }
                               }
                         }
                    }
               }
          }
     }
}
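
If you don’t already have a key pair, generating one is a one-liner; note that the “key” field above takes only the base64 blob from the resulting .pub file, without the leading ssh-rsa or the trailing comment (the file name below is just an example):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/usg-admin
     #generates usg-admin and usg-admin.pub; paste the middle field of the .pub into the JSON above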

That’s a pretty good start to a robust and secure home network. Let me know if I missed anything.

The Internet of Things Needs Integrators, Not Gadgets

October 18, 2016

Crowd-funding and the sharp drop in development costs for internet-connected devices have given us both an onslaught of useless gadgets and the much overused buzzword “Internet of Things”, or IoT. The concept has been pushed way past the borders of absurdity, as highlighted by the likes of @InternetofShit. I’m only half surprised: personal computing has plateaued, and with many of the “traditional” challenges posed by hardware, such as transistor and storage density, being more or less solved, what more is there to innovate in? To break the “sit down in front of a screen” paradigm of computing, the industry is shooting in all directions to find new ground. This spray-and-pray approach to innovation is surely finding new ways to do things, but in the frenzy of it, we are offered truly senseless gadgets, from cloud-connected candles to bluetooth kegel exercisers.

All these gadgets are of course intended to be mass-marketed to the public at large, people who, in the aggregate, are almost completely technologically illiterate. I’d bet that most of the time, they are used with whatever app they ship with, and seldom integrated into other systems through platforms like IFTTT. Because they have to just work even for the least tech-savvy, several shortcuts are taken in the design of these gadgets: more often than not they are battery-powered, and work in a way which handicaps their usability for more advanced users. They are almost never meant to have a lifetime that exceeds a few years, at most. Almost all the time, they are dependent on some online service running in the cloud, which can be hacked, taken down by an outage, or just plain shut down when the gadget-maker inevitably goes bankrupt or disappears in the fog like so many other tech companies.

All the while this is happening, people buying these gadgets with the intent of having technology make their lives easier are connecting all those devices to 50$ routers which are generally both unreliable and insecure. They pipe enormous amounts of data, some of it sensitive in nature, to servers hosted who knows where. Home automation is on everybody’s lips, but houses are still being built with a basic run of RG6 and a few phone lines as the only data cables present within their walls. Technology as a whole is ubiquitous, but no one technology has gained the traction necessary to signal a significant change; one person might have a smart lock and another a few smart lights, but only the biggest nerds have integrated these things into systems which ACTUALLY make day-to-day living easier or more enjoyable. Absolutely none of these piecemeal IoT innovations has had the impact on the population at large of, say, the electric refrigerator or the good old personal computer. I believe the IoT will remain but a buzzword until the industry leaps over this hurdle and brings forth something that becomes as widespread in its adoption.

I feel that home automation is the place where the IoT really has the potential to shine. It’s completely abhorrent that the technology we use to interface with the functionalities of our living spaces has remained largely unchanged for the past century. With the current concern for making dwellings more energy-efficient, the time is ripe for more widespread adoption, but for IoT to gain traction, particularly in the home automation sphere, certain things need to change; some things need to stop, and others need to start happening more often.

STOP MAKING YOUR DEVICE RELIANT ON THE CLOUD. Off-loading compute and management to the cloud is a smart thing to do, for all sorts of reasons which I won’t bother listing. Most users don’t mind sending massive amounts of data to the cloud, and those who do are generally the type of people to homebrew their own automation solutions, so privacy concerns are not a huge deal. Where relying on the cloud gets annoying is when it, or your access to it, fails. The PetNet fiasco is a good example of this. It’s horribly bad design to rely on inputs from the internet to have things work, and even in 2016, nobody should ever be assumed to have a 100% uptime internet connection. Ubiquiti has this figured out for their wireless access points. They employ a provisioning process that stores configuration locally on the devices and expects regular pingbacks from them, but works just fine when the controller is absent for whatever reason, even when the units get power-cycled. I’ve had access point setups run without a controller for MONTHS without affecting the product’s foremost functionality: providing wifi.

START GIVING THE OPTION OF NOT USING THE CLOUD AT ALL. In a perfect world, everything would be open source and I could run a copy of Nest’s backend on my servers if I wanted to. In this imperfect world of ours, that is not an option. That doesn’t mean users shouldn’t be given the choice to opt out of the cloud entirely, and use the array of sensors onboard the device in different ways. Perhaps this is too much of the Apple-championed “protected ecosystem” idea permeating the tech sector at large, but making cloud management a non-negotiable part of the product is surely preventing a lot of these IoT companies from moving units.

Sure, there’s an API for everything, but the fact that I have to call a server on the internet to interact with my thermostat is totally nonsensical. That’s not to mention that API access usually can’t interact with the sensors directly. I’d love to be able to save myself a PIR install and use the presence sensor on my Nest products directly, but sadly, that’s not an option. How hard can it be to have some way of querying devices, via network or otherwise, to use the raw input they can provide?

It’s a matter of time before we start seeing people’s expensive gadgets turn to paperweights because the company providing the backend goes tits up. By enabling direct-to-device interaction, we can avoid this.

STOP MAKING EVERYTHING WIRELESS. I’m talking about both power and data. While the evolution of ICs and battery tech has us changing batteries less and less often, doing so is still an ENORMOUS pain in the ass. There’s a reason why fire departments literally have to go door to door to remind people to change the batteries in their smoke alarms. Power redundancy: YES! Battery-only power that makes me buy obscure-sized coin cells every 18 months: HELL NO!

Wireless for data is obviously a go-to for IoT companies because it removes an important barrier to entry for consumers, who can just take stuff out of the box, plug it in and enjoy. This is fine if you use RGB lighting as a party trick to impress guests in your living room, but it’s very annoying if you have stuff throughout the house that needs to talk to Zigbee-wifi interface boxes. Soon, you find yourself with Zigbee relays plugged in everywhere, which is an eyesore. I can’t really comment on the security and interference implications of automating an entire house over wireless, but generally, it seems that using whatever wireless protocol is just a lazy workaround to using cables, which are more secure, resilient and effective at transmitting both power and data.

We live in a time where 4-pair network cable can transmit incredible amounts of data all the while powering a small TV, and the IEEE keeps building on this technology. Start thinking about how you can leverage this.

START EMBRACING PERMANENT INSTALLS AND LONG LIFE CYCLES. People buy homes for years, decades even. If you intend to make a product which makes a home better, it should have a life cycle that makes sense on the scale of the thing it’s going into. Support your products for long enough to make permanent installs a likely consideration, and find new ways to monetize a longer relationship with your customer base. By all means, innovate and disrupt and do all those things you startup nerds are up to, but don’t forget the people who bought your product because it solves one of their problems in favour of the fad-followers who react strongly to hyped-up release announcements. Think cross-compatibility (of brackets, connectors, wiring) and upgradability (of software, and hardware components where applicable). A product that complements a home shouldn’t be sold like a phone or a smartwatch.

This is wishful thinking, because honestly I don’t think anybody is in the business of making durable goods anymore. It’s a shame, especially considering the recurring revenue potential of IoT devices in the form of services that complement the device.

START SUGGESTING HALF-DECENT NETWORKS TO YOUR CUSTOMERS. Defence wonks who work on military drones believe they should be used as a tactical tool for strategic ends, and lament how politicians just use them as cool toys whenever convenient. I think IoT has the same problem: IoT devices should be considered part of a larger infrastructure, and designed to fit into a well-built core network, not as standalone solutions.

This goes back to the “stop making all the things wireless” point of course, but I’d really want to see the industry push for a holistic approach to making your home smart, starting with the basics: a decent router, correctly configured and laid-out access points, a hardwired network which is well integrated into the building and a proper rackmount cabinet where everything terminates cleanly. Some will say I’m a gear slut, and to that I will plead guilty as charged, but hear me out. I’m not asking for a 42U cabinet in every home. I’m asking for a semi-standardized way of laying out the vital components of a smart home which makes upkeep easier.

Networking equipment ought to be suggested as an option to owners building new homes; so much of our lives is spent interacting with internet-connected devices that it’s completely ridiculous to still have half-assed, retro-fitted solutions providing this vital connectivity.

This will be a hard transition to make because of resistance from both consumers and IoT manufacturers, but in the long run, developing infrastructure within homes onto which companies can build will assuredly enable additional innovation in automation technologies. Tech people, make a consortium or whatever to push this agenda, and you’ll get to open up new avenues for your business in addition to creating new positions with cool names that you can put on your business cards.

START CHAMPIONING THE EXPERTISE OF INTEGRATORS. This ties in closely with all the previous points. No amount of ease of installation and 3rd-party integration capability will replace well-thought-out integration of automation solutions into people’s dwellings. No matter how beautifully designed your product is, ideally it should be (in most cases) entirely hidden and require as little user intervention as possible. People want things that JUST WORK, and I’d argue that many times they are willing to pay people to get their stuff set up to just work.

Entrepreneurs are going to have to start offering those services, but there are things manufacturers can start doing to help a network of home automation integrators appear. Define installation norms, collaborate with your industry peers while doing it, and start working on certification and installation referral programs to legitimize the people who are willing to do the legwork of showing the masses what your products can do. Nest has this kind of program, and it’s an honest attempt at getting something like this going. In the end, integrators who know what’s out there and what can be done have the potential to generate sales and create loyal customers. Additionally, they provide priceless technical feedback and possibilities for large-scale beta-testing of new products in real use cases. That’s stuff you usually don’t get when interacting directly with the customer.

The takeaway of this post is that companies bringing IoT products to market in the home automation sphere need to stop trying to push units through big-box retailers and design products that are meant to be long-term investments, just like air exchangers, heat pumps, furniture and large appliances. Give us fewer gadgets, and more solid solutions to actual problems. Give us less Bluetooth and more PoE Ethernet, things that integrate with a house, not a phone. Don’t just offer a cloud API; offer more complete access to your devices themselves, for users who want to use your products as products, and not as services. Maybe then we can start seeing truly smart homes.

The “28 Pages” and What They Mean for Saudi-American Relations

August 16, 2016

A few weeks ago, a mysterious portion of the Joint Inquiry into Intelligence Community Actions Before and After the Terrorist Attacks of September 11, 2001 was declassified after two years of review proceedings. Commonly known as the “28 pages” (there are 29 of them, but the name stuck), the document describes the inquiry’s findings concerning possible links between the Kingdom of Saudi Arabia and certain individuals involved in the September 11 attacks.

Read the full article at 45eNord.ca.

THE 28 PAGES & SAUDI-AMERICAN RELATIONS

August 1, 2016

Late last week, a mysterious portion of the Joint Inquiry into Intelligence Community Actions Before and After the Terrorist Attacks of September 11, 2001, which had been classified since the report’s release, was declassified following a two-year-long declassification review. Known colloquially as the “28 pages” (the count is wrong but the name stuck), the document describes what the inquiry found regarding possible links between officials from the Kingdom of Saudi Arabia and individuals known to be involved in 9/11.

Read the entire article at Observatory Media.

Bypassing Bell Fibe FTTH Hub with Ubiquiti EdgeMAX Equipment

July 31, 2016

Whenever I have to interact with big telcos, I inevitably come to ask myself why they are still in business. It’s a wonder that companies so big and so dysfunctional on so many levels still have any customers at all. I recently had to do an ISP switchover from dual Cogeco 100mbps lines over copper to a single Bell Fibe 250mbps line, and my experience was less than stellar. Apart from getting the usual “oh, we’re sorry, your line isn’t quite active yet on our end” not once but twice after the install tech’s visit, their business technical support was entirely useless.

The dreaded Sagemcom FAST4350, aka Bell Hub 1000. Bell can apparently connect to this to diagnose stuff; you know what they say about back doors… Good riddance.

If you’re a Fibe customer, you probably know that Bell insists that you use their Hub 1000 or 2000 (their name for what is actually a Sagemcom FAST4350 router) in your networking setup, regardless of the fact that as a business customer, you probably have something more suitable for the job. If you try to bypass it, don’t hope to get any kind of technical support: the mindless automatons they haven’t managed to outsource or replace with machines at their level-one support will absolutely refuse to escalate your call or provide any sort of useful information. The Sagemcom has a bridge mode that can be triggered with button presses; if it’s bridging, they won’t support it either. I’ve heard from many sources that they have remote access to the router, and I would be tempted to believe it. Since neither I nor my customer was about to let a 30$ piece of shit router with a back door be the weak link in our connectivity, we found a way to get things working with the Edgerouter we had on site, while completely bypassing the Sagemcom box.

The Alcatel-Lucent ONT. This is obviously not optional.

The VLAN 35 / VLAN 36 trick is well known, albeit completely undocumented by Bell, and seems to work network-wide. Basically, traffic to the Bell ONT is divided into two VLANs: 35 for the internet traffic and 36 for the Fibe TV streams. Our use case only had an internet connection, so getting it to work was as simple as creating a VLAN interface on the ethernet interface we used to connect to the Alcatel-Lucent ONT, and creating a PPPoE interface within this VLAN. Once that’s done, enter your username and password for the PPP connection as provided by Bell, set the firewall rules for your connection and you’re good to go. The commands should look like this:

set interfaces ethernet ethX vif 35
     #creates VLAN 35 on interface ethX
set interfaces ethernet ethX vif 35 pppoe 0
     #creates PPPoE connection zero within the newly created VLAN
set interfaces ethernet ethX vif 35 pppoe 0 username thisisyourusername@bellnet.ca
     #self-explanatory
set interfaces ethernet ethX vif 35 pppoe 0 password thisisyourpassword
     #ditto
set interfaces ethernet ethX vif 35 pppoe 0 firewall in name WAN_IN
     #set inbound firewall rules
set interfaces ethernet ethX vif 35 pppoe 0 firewall local name WAN_LOCAL
     #set local firewall rules
set interfaces ethernet ethX vif 35 pppoe 0 default-route auto
     #get your routes from the PPPoE connection
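
Remember that these set commands are entered in configure mode and only touch the candidate configuration; nothing actually changes until you commit, and the change won’t survive a reboot until you save:

commit
     #apply the candidate configuration
save
     #write the running config to the boot config so it persists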

By default, your Edgerouter uses all the right options for connecting via PPPoE, except for one. MTU was set to 1492 explicitly without my intervention, and name-server to auto as well. This will allow you to connect and get an IP, but only certain parts of the internet will be accessible. I thought this had something to do with routes, but both a static interface default route and the PPPoE routes yielded the same results. Connections to YouTube and other Google sites worked great, but half the internet didn’t, with packets stopping 6 or 7 gateways down. Turns out the missing MSS clamping is to blame, so you’ll need to set that in your firewall options.

set firewall options mss-clamp mss 1412

Without this option, your packets can end up being too large and get dropped by certain hops which enforce maximum MTU strictly. What’s funny is that MSS clamping is assumed if you use the wizards to define your WAN as PPPoE, but it’s not one of the assumed options when setting up PPPoE from scratch. Since we were migrating from a load-balanced setup with DHCP being provided by modems, this was not present in our config.
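
One related knob: the EdgeOS config tree also scopes mss-clamp by interface type, so if the clamp alone doesn’t seem to take effect, something like the following may be needed as well (I’m going from the EdgeOS config tree here, so verify it against your firmware):

set firewall options mss-clamp interface-type pppoe
     #apply the clamp to PPPoE interfaces specifically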

Keep in mind that this will work only for setups without Fibe TV. For setups with it, you can follow the excellent instructions from here, with the added bonus that you can probably use the software port bridging functionality in your Edgerouter to eliminate the need for a switch between your ONT and the Bell Hub. This is entirely untested though, so have fun at your own risk.

My primary frustration with Bell in this case, apart from having to go on-site three times to get a relatively simple job done, is that I’m pretty certain a more senior level 2 technician would have pointed to the MSS clamping in a matter of minutes; it’s just the kind of thing you know when you work in the field all day. I know for a fact that there are some technically competent folks who work at or for Bell, because I’ve had great experiences in the past. I’ve assisted in some pretty hard-core roll-outs of Bell services to enterprise customers, and the techs there were present and helpful. Where they were not, a quick chat with the sales guy could fix things very rapidly. The problem is that if you don’t have a sales guy, even if you’re a business customer, you have to navigate the same byzantine call-in support system as the rest of the plebs, wait a long time and hope for the best. If you’re lucky you’ll get a zealous one who’s willing to break SOP to make you happy; if you’re unlucky, you’ll get another drone who just reads the lines he’s supposed to.