found drama

get oblique

Category Archives: tech

Reviews, speculation, and other idle thoughts on hardware, software, firmware…

“a good thing being able to configure a framework”


The Future of the JavaScript front-end framework:

More/less the same drum getting beaten by so many right now (see also: Joreteg’s announcement about Ampersand, although this opinion piece is probably a better illustration of the theme) – and by that I mean: “Go ahead and provide an all-inclusive framework, but be modular so I can swap out the parts I need.” Which is great, but I would say misses a big point: discoverability of those modules. Say what you will about Spring’s role in the Java eco-system, but it’s worthwhile to align with a single trustworthy starting point. Part of what’s missing from the “framework fatigue” discussion is just that – that our “big frameworks” only provide a relatively thin slice of what our whole app needs and we’re in “contrib” (and/or plugin, and/or mixin, etc.) hell for everything else. This glosses over the BIG-Big frameworks of a few years ago (e.g., Dojo, ExtJS, YUI), but the point stands.

(A version of this previously appeared as my comment on Prismatic.)

Nash on Joint Cognitive Systems


Ghosts in the machines:

It feels intuitively right: computers are better at plugging through repetitive information very quickly, humans are better at context-specific, qualitative judgments about that information. But the problem with this approach is that day-to-day tasks in the world we all operate in are not so neatly composable in this way — we are constantly moving between a variety of tasks that could be more easily done by either machines or humans.

Hits on a lot of good points. Implicit here (i.e., not explicitly called out) is the fact that if DevOps is about increasing the humanity of designing, implementing, and managing big systems (and make no mistake: that is what it’s about) then the notion of Joint Cognitive Systems becomes very important. “Divide and compensate”, MABA-MABA approaches arguably just turn the operators into slaves of the automators’ automation, and potentially make things even more chaotic.

birds-eye view of JSPM and Babel for ES6


Building with AngularJS, JSPM, Babel, Gulp and ES6:

Unfortunately, it’s a very birds-eye view of putting these pieces together, but it’s still worth checking out because it gives a glimpse of what it might be like. The AngularJS-specific bits offer nothing new (and frankly feel a bit outdated), and I’m still just lukewarm on Gulp — but what’s particularly interesting is the combination of JSPM and Babel, and how that empowers developers to start using ES6-style JavaScript today.
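
To make that appeal a little more concrete, here is a minimal sketch of the sort of plain ES6 you get to write under a setup like that: Babel transpiles the syntax, and JSPM (via SystemJS) takes care of loading the modules. (The file names and the trivial Greeter example are mine, not the post’s; the actual jspm configuration is what the linked write-up walks through.)

```js
// src/greeter.js — ordinary ES6 that Babel transpiles down to ES5
export class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    // template literals are another ES6 feature Babel handles for you
    return `Hello, ${this.name}!`;
  }
}

// src/main.js — ES6 module syntax instead of AMD/CommonJS boilerplate
import { Greeter } from './greeter';

const greeter = new Greeter('world');
console.log(greeter.greet());
```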

re: Slatkin on PPK on client-side templating


One Big Fluke › Experimentally verified: “Why client-side templating is wrong”:

By Brett Slatkin (“One Big Fluke”).

If you followed PPK’s AngularJS post from January (and his follow-up client-side templating post), then this is a worthwhile reply in its own right — and somehow I missed it when it ran.

Whereas PPK argues largely from a position of principle, Slatkin throws experiments and data at it, and while he arrives at different conclusions, he’s also gracious about the whole thing, drawing the conclusion thusly:

The take-away here is to choose the right architecture for your problem.

What’s good about this post is that he dug deep into this problem, cut it up a bunch of different ways, and remained open to the possibility that either approach might “win”. Which isn’t to say that his experiments were perfect (if some flaws aren’t obvious to you then check out the comment thread where several “improvements” are enumerated) or that there isn’t some speculation going on, but we have some good applied science here to give us a sensible picture.

Two important things here though:

First, it’s hard not to read Slatkin’s post as a refutation of PPK’s. This is unfortunate because, dogmatic though his tone may be, PPK offers up plenty of good points that you should consider when making these choices for your application. Whether you like AngularJS or not, whether you think client-side templates are a good fit for your problem or not, these thoughts are at least worth mulling over.

Which leads to my second point: it’s my sincere hope that people go with Slatkin’s real take-away, and that development teams have the discipline to look at their application, to analyze the data it’s producing about their audience, etc., and to think about their goals for the future — and to design and build their applications with those data in mind. It’s entirely too easy to cargo-cult decisions from posts like this one (and/or from PPK’s) and just assume that you got it right because you read something compelling from someone smart. By all means take their data into consideration, but no one else’s data is as meaningful to you as your own.

initial thoughts on Aurelia


Introducing Aurelia:

My first reaction is (of course) that this is why a lot of developers look at JavaScript and the web front-end and see only thrash, and feel only fatigue. And in this case in particular (given Eisenberg’s previous project involvements), it smells a bit of Icarian hubris.

And in that respect, you won’t find me among the early-adopters. (Taking a page from “sometimes better isn’t better if it’s different” here.)

That being said, this isn’t to suggest that there is nothing interesting happening in the Aurelia project. Count me with Tom Dale (here, and a couple of the tweets that immediately follow) in that the pluggable data-binding seems like the huge takeaway here, that we are going to see it in every framework, and that you can’t have a compelling inter-op story without it. But much of the rest of what’s being pitched here (e.g., modularized code, no external dependencies, “just vanilla JavaScript”) — that’s all stuff that we’ve heard before.
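
For anyone wondering what “pluggable data-binding” even means in practice, here is a rough sketch (emphatically not Aurelia’s actual API, just my illustration): the binding engine codes against a small adapter interface and asks its registered adapters which one knows how to observe a given property, so the observation strategy itself becomes the swappable part.

```js
// Illustrative only: a fallback adapter that watches a property by polling.
class DirtyCheckAdapter {
  // Return true if this adapter knows how to observe the given property.
  handles(obj, propertyName) {
    return true; // dirty-checking can watch anything, so it acts as the fallback
  }
  // Invoke `callback` whenever obj[propertyName] changes; return a dispose function.
  observe(obj, propertyName, callback) {
    let oldValue = obj[propertyName];
    const timer = setInterval(() => {
      if (obj[propertyName] !== oldValue) {
        callback(obj[propertyName], oldValue);
        oldValue = obj[propertyName];
      }
    }, 120);
    return () => clearInterval(timer);
  }
}

// The binding engine just asks its adapters, in order, who can handle a property.
function observeProperty(adapters, obj, propertyName, callback) {
  const adapter = adapters.find((a) => a.handles(obj, propertyName));
  return adapter.observe(obj, propertyName, callback);
}
```

Swap in an adapter backed by getters/setters (or by whatever change-tracking the host framework already uses) and the rest of the binding machinery stays the same, which is presumably what makes the inter-op story compelling.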

(As a brief aside re: the Icarian hubris remark above: let’s not lose sight of the easy-to-see flip side of that: if anyone is going to propose a new framework, who better than someone like Eisenberg? Who better than someone who has been deep into other projects like this? And/but that’s also where the disappointment comes from — that it just winds up feeling like yet another technical leader deciding to zig instead of zag, to go off and create a new thing instead of doing the hard work of taking something and making it better.)

Elhage re: simple software engineering lab notebooks


Lab Notebooking for the Software Engineer:

Oldie but goodie by Nelson Elhage at the “Made of Bugs” blog. I agree with most of these points, though I consistently break the rule about keeping only a chronological journal, instead trying to bucket entries under application-specific or domain-specific journals. (That’s a flavor of Premature Optimization, and don’t let anyone tell you otherwise. [And right here you can expect me to insert all of the exceptions to that rule that I’ve cooked up over the years.])

One thing that should go without saying, and is a bit of advice I’ve given over and over again that dovetails well with Elhage’s post here: feel free to slow down on your “lab notebook” at any time, but the moment that you feel yourself getting scattered, go right back to a disciplined notebook. Nothing makes you slow down and focus on The Right Things quite like keeping a record — no matter how informal that record is.

A Framework for Modern User Stories


A framework for modern User Stories:

My inner Scrum Master got a little bit excited about this.

What I liked about it is the attention it gives to the language of a User Story — right down to the specific words that are used to make the statements. Granted, the other thing that I liked about this piece was how it embraces the traditional User Story format while updating/expanding it a bit to allow for more information to get “built in”.

You could argue that what’s added should have been encompassed by Acceptance Criteria anyway, but using BDD statements here seems more constructive. (…because they easily convert into acceptance tests.)
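
To make that conversion concrete, here is a small, made-up example (the story, helpers like signInAs and visaTestCard, and the Mocha-style harness are all my illustration, not something from the linked piece): a single BDD-style acceptance criterion carried almost word-for-word into an acceptance test.

```js
// Hypothetical story:
//   As a returning customer, I want to save a payment method,
//   so that checkout takes fewer steps next time.
//
// One of its BDD-style criteria: "Given a saved card, when I reach the
// payment step, then that card is offered by default."
var assert = require('assert');

describe('saving a payment method', function () {
  it('offers the saved card at the next checkout', function () {
    // Given a signed-in customer with a saved payment method
    var customer = signInAs('returning-customer'); // hypothetical test helper
    customer.savePaymentMethod(visaTestCard);      // hypothetical fixture

    // When they reach the payment step of a new checkout
    var paymentStep = customer.startCheckout().goToPayment();

    // Then the saved card is offered as the default option
    assert.equal(paymentStep.defaultPaymentMethod(), 'Visa ending in 4242');
  });
});
```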

re: Peter Bell on “Innovation Debt”


“Just as technical debt can kill a code base by turning a green field project into a big ball of mud, innovation debt can kill an engineering team – moving them from a cutting edge crew to a group that’s barely competent to maintain a legacy app.”

Peter Bell, Innovation debt

There’s the argument that “innovation debt” is just another form of technical debt — that the latter isn’t just the tests you didn’t write, but the opportunity costs of not exploring alternatives.

What I like about this post is that, assuming you take a measured approach here, you can use a combination of these techniques to keep engineers interested and happy. You can give people the room to explore some new technologies and maybe, if those technologies actually are good choices, then by all means bring them into the stack.

That being said, it’s almost too easy to read Bell’s post and use it as a justification for your own Magpie Syndrome. A “culture of learning” is great — but recognize that your exploration can also lead to another, equally valuable bit of knowledge: the knowledge that, after exploring some new piece of technology, it’s actually not a good idea to move forward with it. Trying new things is fine — and you should explore alternatives — but sometimes your innovation is “add Java 8 streams” and not “rewrite everything in Scala”.

I also feel like it’s worth calling out (for the tech leads in the audience) that not everyone wants to do shiny new stuff all the time. (See also: Matt Asay’s thing re: “developers calling it quits on polyglot programming”.) “Innovation debt” as Bell describes it sounds like it goes a couple ways: the “explore innovative shiny new things” way, and the “double-down on what you have and make sure your engineers know their shit” way. Ignore both at your peril.

“Code that has been merged and not deployed is a loaded gun.”


“Code that has been merged and not deployed is a loaded gun. If I merge in my changes and don’t deploy them, and you then merge and deploy yours, you’ve just deployed mine too. This was more than you bargained for. It’s now more likely that your deployment will break something, and harder for you to fix if it does.”

Baron Schwartz, Why Deployment Freezes Don’t Prevent Outages

I prefer to think of these more as “environment freezes” than code freezes, but that’s just a different name for the same thing. There are a lot of smart things being said in here, and I can’t think of any points with which I disagree. A freeze lasting longer than the duration of a demo (e.g., a few hours, max) is damaging.

That being said, this extends pretty easily to any code that isn’t merged to master and deployed to at least some environment. Even without a code freeze, the longer you wait before you build and deploy, the more likely you are to experience some pain and suffering. (Hence my choice of pull quote here.)