Friday, October 3, 2014

Userless User Stories

"The Struggle" is real, says TheHackerCIO as he attempts to do "Agile" development in yet another pathological Behemoth corporation.

Central to the idea of the Agile development approach is to drive everything from "User Stories." The point of this is to get users driving development: to bring users' needs, desires, and requirements to the forefront of developers' attention.

Supposedly, TheHackerCIO has been working in "Agile" environments for several years. But he would love to actually use Agile! Just once. Is that asking too much?

Although it claims to be an Agile environment, the Pathological Behemoth corporation just forces the round peg of Waterfall into the square hole of Agile and lets the chips fall where they may.

For instance, I've seen "User Stories" in the Product Backlog which contained references to the project phase! There are no "phases" in an Agile development. But such is the state of the industry.

The latest outrage is User Stories where there is no actual user! These user stories start off "As a developer I want to ..." and end with so much BS about what non-functional requirement needs to be attained. For example, "As a performance engineer, I want to make sure that cache access is less than 10 milliseconds per request."

"As a developer ..."
"As a tester ..."
"As a deployment coordinator ..."
"As a production support resource ..."

Where the hell is the user?

If anyone doubts the stupidity of this kind of approach, I have a challenge for them! Take up this "user story" from my product backlog and start a sprint on it. Hell, I'll give you two weeks. Then we can look at your code.

On this same subject I ran across another blogger who objects. I'm not surprised, since anyone sane would object. But I point my readers to it here.

I Remain,

TheHackerCIO

Thursday, October 2, 2014

Zero to One


Last night Peter Thiel plugged his new book.

It looks amazing!

The lecture was great. TheHackerCIO found Thiel in person to be as stimulating as he is in the many videos available interviewing him, such as on TechCrunch.

I highly recommend getting this book, although I've only made it through the first two chapters.

I think this is the only time I've ever recommended a book without having completed reading it. It's worth reading just for the Preface and Chapter One!

Peter defines technology as "doing more with less." In the lecture he expanded on this, pointing out that we have a big problem with our government at present, because every year it spends more money and gets the same or worse results -- for instance in Education. Which means that government (as it stands at present) is doing less with more. And this was not the case back even in the 1960s. He pointed out in the lecture that we could attack a problem like landing on the Moon back then. We could even declare war on Cancer. Today, no one can or will declare war on Alzheimer's. It wouldn't poll well, and couldn't happen. We're unwilling to take on big challenges.

Peter says that "humans are distinguished from other species by our ability to work miracles. We call these miracles *technology*."

Again, get the book. Get it quick. And I'll try to post a more complete review when I've read it fully.

In the meantime, I Remain,

TheHackerCIO

Wednesday, September 10, 2014

DevTalkLA Does an About Face


DevTalkLA, that Geeky Book Club, started a new book last night: About Face. This is the new 4th edition of a "classic" which has been around for two decades. This latest release came out on September 2nd, and includes mobile devices!

Mobile is crucial for user experience! So this promises to be a very important book.

TheHackerCIO doesn't specifically focus on User Experience (UX) as a sole pursuit. But he has to wear every hat in the wardrobe, so adding knowledge about UX and IxD (Interaction Design) is welcome. It adds to the arsenal.

We had a lot of new blood at the Meetup. 24 people RSVPed, but of course only 12 showed, making for a 50% Unreliability Factor. It's always good to note how flaky, irresponsible, undependable and unreliable people are. It's an important index for life. In this case, 50% of respondents actually had the politeness and follow-through to keep their commitment. TheHackerCIO follows this principle, "Always call out bad behavior." You don't want that to go unremarked upon. Ideally you'd like to see it go away. Following through on your RSVP allows a Meetup organizer to get a room of the correct size. If all 24 had showed, we would have needed the larger room. It also allows the organizer to get food or refreshments for the Meetup. I mention this just to indicate that RSVPs have a *reason*. And it's only common courtesy to RSVP if you're going and stick to it.

With over 50% of attendees being first-timers, the Fearless Leader spent a good deal of time giving everyone an overview of DevTalkLA: how we've been continuously discussing technology books for 15 years or more, how we select the books (by vote), and the "bidding" system we use to determine who goes next on offering a comment/question/problem or issue to discuss.

As usual, the discussion was wonderful. I hope we see these newcomers return next week!

We began discussing the introductions, with the interesting point that designing behavior is a totally new concept. Products have been designed, but in a very static sense. The form of a product has been designed. Even the content has been designed. But with computer technology, we now have an additional element of behavior to consider and design.

We were greatly missing one of our old-timers for this book. He has to sit it out for personal reasons. He was always such a helpful addition to the group, because he came in with Post-It tabs showing clearly every next comment he would raise, and in the absence of others having read the book, we could depend on him to drive the discussion. Since this book was a new release, I made sure that I did a similarly thorough job, so that we would have a very full and helpful discussion for the kick-off session.

On page xxii and following, I noted that there was some confusion. First the authors questioned whether an experience could be designed. So instead they "have chosen Moggridge's term 'interaction design' to denote the kind of design this book describes." This statement in isolation isn't confusing, but the following paragraph states that "we feel the term *user experience design* is most applicable." So the question is, WTF? Is this book going to use the term "interaction design," or "user experience design," or both? If both, when will each be employed, and why the need for two terms? It's not crucially important, but it raised issues for me, and no one else was able to offer any insight.

TheHackerCIO has lots to say, but this is running too long already. So I'll leave this ...

 To Be Continued ...

TheHackerCIO






Thursday, August 21, 2014

The Alleged Importance of Communications


On every side you hear the cries of people claiming that communication is essential for the job market. If you don't believe me, Google it. But since I always supply some indication of support for my opinions, you can look at this Forbes article, where number one on the list is "Top Notch Communication Skills."

And TheHackerCIO agrees with Forbes! Communication skills are crucial for project and career success.

Why, then, does the title to this article include the word "alleged?"

Because I seriously question the valuation companies give to communication skills. If it were high on their list, I would not attend meeting after meeting where I strained to puzzle out every word from a strong accent. I'm not a Xenophobe, by the way. I was quite happy to have good hard workers from foreign countries on my project. In the UK, I noted & discussed with my wife how it was the foreigners who stayed with me late into the night to get a presentation deliverable done for a tough deadline. The Brits went home. The Union Shop representative left exactly at quitting time (6pm).

For you Americans, that wasn't a slip. Yes, a lot of companies in the UK are Union Shops. Their technology professionals are unionized. And they have a Union Rep, to make sure that no exploitation of the working class -- if, indeed, programmers are the working class -- takes place! And the Union is the kind of place where Stalin is still viewed rather sympathetically. I'm pretty sure it would be easy to get a stirring defense of him out of some of the denizens of the Union facility. But we've gone on a rabbit trail here. It was fun, and TheHackerCIO likes to open up people's perspectives by hitting them with conditions in other places, so it was useful. But now we need to get back to the problem of communication.

As I said, I'm happy to have foreign workers on a project & I've had very positive experience with their work-ethic. But over the years I've found that poorer and poorer communicators have come into the work force. At present, I have to ask people to repeat themselves in virtually every meeting. The other day I was distance chatting with a foreign colleague. I was amazed to hear him complain about exactly the same thing! I can understand him at the 100% level, but he can't even understand most of his fellow countrymen!

This could not have happened if companies really valued communications. The proof of what you value lies in the results you achieve and tolerate.

Outsourcing, too, is an example of a widespread practice which impedes communications. I could list a half dozen reasons:
*  time zone differences
* cultural differences between differing countries
* language differences & accent problems
* corporate divisions resulting in company cultural differences
* lack of non-verbal cues
* inability to "just drop in" at someone's desk

I bet, if I tried I could come up with another dozen. Again, if companies *really* valued communications ability, they would avoid outsourcing like the plague. That is not what they do. There has been a slight pull-back in the management world. But it's mostly like a stupid child pulling his hand back from the stove after being burned for the twentieth time.

Just this year I overheard one client conversation about their outsourcing effort. With hundreds of cookie-cutter deliverables coming from the off-shore team, EVERY ONE was unusable! Just like what I've seen. But what is far more remarkable, is that this client's senior management team were *from* the country where the outsourcing took place. They visited the offshore team regularly. They were in constant contact and direct personal oversight, often in-person. Yet they were unable to avoid a debacle of this nature. That's because communications is hard. And any impediment AT ALL has to be stripped out of the way.

But that's assuming that you value ... and I do mean value ... *truly* value .... communications skills.

I Remain,

TheHackerCIO

Wednesday, August 20, 2014

Stoking the Furnace

Reading about writing lately, TheHackerCIO came across a metaphor: writing is like a blast furnace. It takes many weeks to heat one, to get it ready for making steel. So too in writing, or blogging: one must prepare the mind. And the process takes time.

Stay tuned...

For TheHackerCIO is stoking up the fires again, for the coming Fall.

I've been asked questions about technology careers ...

I've been asked questions about blogging ...

I've had a new client, crazier than most ...

I've had interesting discussions about the crazy clients we shuffle back and forth between ...

And I'm particularly interested in conceptual integrity & revisiting Frederick Brooks, that author of timeless classics ...

As well as rethinking my own career and how to rethink such a thing ...

Not to mention the need to achieve what YOU want from the job you work at.

So, stay tuned for more from TheHackerCIO, coming soon ....

I Remain,

TheHackerCIO

Thursday, May 22, 2014

A Startup that Remains So is a Failure


VC funded Startups always kick out the Founders. It's regarded as a measure of success. And, frankly, sometimes they need to be kicked out. But, recently, TheHackerCIO found this presentation. It not only explains, reasonably,  *why* founders need to go. It explains, concisely and cogently, why Startups must not remain Startups. To remain a Startup, is to Fail.

It also focuses on the innovation occurring in the lean side of Startup financing. That is to say, in places like Y-Combinator, and other accelerators. A good part of that innovation is in the Entrepreneurial education provided. It points out that this is very different from traditional Business-school curricula. The author, Steve Blank, calls for the creation of Entrepreneurial Schools, or E-Schools, in contrast to Business Schools (or B-Schools). He also would like to see them connect with universities. 

I highly recommend Steve's presentation for anyone interested in Startups.




Friday, May 9, 2014

Why Hackers Hate WindoZe


The story of the font.

It's a good example of why Hackers Hate WindoZe.

Sometimes TheHackerCIO does something on a front-end Web site. In this case, I wanted to get an unusual font into the mix on a web page. Stackoverflow held the immediate answer, but when I tried it out, there was no change on the browser page.

After an hour's frustration, further googling, and tweaking, I took a break. I washed my face. Then came the necessary reflection.

What are you doing? You're trying to combine multiple learning streams. On the one hand, you're learning the new touch gestures of Windows 8 and where everything has been moved around. On the other hand, you're attempting to actually DO something with WindoZe. So then it clicked. One of the posters HAD said, "any decent browser ..."

And that was the problem. I was using what came installed on the box. I wasn't using a decent product.

So I immediately returned to the laptop, downloaded Firefox, and the font showed. Just for the heck of it, I downloaded all the others -- Chrome, Opera, Safari. All fine. I'd just wasted an hour with WindoZe.
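For context, the kind of declaration at issue looks roughly like this; the font name and file paths are placeholders, not the actual ones from that page. (Older IE builds wanted the proprietary EOT format, which is why the poster said "any decent browser.")

```css
/* Hypothetical custom web font; modern browsers accept WOFF/TTF,
   while old Internet Explorer versions only understood EOT. */
@font-face {
  font-family: "MyUnusualFont";
  src: url("fonts/my-unusual-font.woff") format("woff"),
       url("fonts/my-unusual-font.ttf") format("truetype");
}

h1 {
  /* Fall back to a generic family if the download fails. */
  font-family: "MyUnusualFont", sans-serif;
}
```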

But it's important to have continual reminders. Luckily, I didn't spend that much time on it. I can amortize the time against learning the Windows 8 Touch gestures and re-org.

But you do have to stand in awe at how nothing out-of-the-ordinary ever works in a WindoZe environment.

After all, why should Microsoft support what every other browser on the face of the planet does?

I Remain,

TheHackerCIO

Wednesday, May 7, 2014

A Good API is Hard to Find


API design was the GeekyBookClub topic last night at Dev Talk LA.

RESTful Web APIs is the current book, and we had a near overflow turnout for the discussion of chapters 2 and 3.

The book is turning out to be good. But it's the give-and-take of the group that really makes the club work!

Some members have worked ahead, which gives them a bit of an unfair advantage, but TheHackerCIO won't hold that against them. :-)

I particularly liked the fact that the author jumped right in with an actual API to take a look at. A Simple API (chapter 2), in fact, is a micro-blogging API that he uses to illustrate his points. He suggested using wget as a command-line tool to play with it.

I found that I don't have wget available on my MacbookProRetina.

As I mentioned to the group: when someone asks you how long something is going to take, always ask them, "Is this an estimate, or a commitment?" Because management needs to be reminded of this. A lot! Anyway, I curl-ed wget, but I couldn't get it to build, because my Xcode was a version or two out of date, and that download/install seemed to be taking plenty of time. What should have been a 10-minute diversion threw me off for a half hour.

I switched over to simply using Advanced Rest Client, a Chrome extension I highly recommend! You can issue whatever RESTful calls you wish, with any verb you please, graphically from within your browser, and you can even keep them organized in a file structure for reuse. A very handy tool.
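For those who'd rather script their exploration than click, the same kind of calls can be sketched with Python's standard library. The endpoint and payload below are hypothetical stand-ins, not the book's actual microblogging service:

```python
import json
import urllib.request

BASE = "https://api.example.com/posts"  # hypothetical microblogging endpoint

# Build (but don't send) a GET listing recent posts.
get_req = urllib.request.Request(BASE + "?limit=5", method="GET")

# Build a POST that creates a new post, as JSON.
body = json.dumps({"text": "Hello, microblog!"}).encode()
post_req = urllib.request.Request(
    BASE,
    data=body,
    method="POST",
    headers={"Content-Type": "application/json"},
)

print(get_req.method, get_req.full_url)
print(post_req.get_header("Content-type"))
```

Hand either request to urllib.request.urlopen() against a real endpoint to actually send it.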

There were too many take-aways from the discussion, but some of the biggest for me were:

1. Being Liberated by Constraints. As the author notes, constraining your REST design, for instance, by using his "personal standard," which requires the JSON to follow some structure, can indeed keep your developers from going wild.

2. I never knew about LINK and UNLINK, but they seem worthwhile. We talked for a while about how POST should probably be used to create the resource, perhaps with an embedded link to whatever other resource is this resource's target. But then LINK can be used to create a further relationship, going the other direction, from target back to the dependent. (I may be missing something here, because the author tells me he's going to fill me in on Chapter 11.)

3. PATCH -- looks like a performance enhancement. I'd say to avoid it until absolutely necessary, because it's not idempotent. You're only going to patch a portion of a representation, rather than replacing the whole thing.

4. OPTIONS -- we need to ALL be using this, and straight off the bat, to get the full "billboard" of what we can and should be able to do with our API. No one does. But that doesn't mean we shouldn't start!

5. Overloaded POST -- The author points out how, in the web APIs we use, POST is overloaded with anything and everything. As he puts it:
The HTTP specification says that POST can be used for:
 Providing a block of data, such as the result of submitting a form, to a data-handling process.
That 'data-handling process' can be anything. It's legal to send any data whatsoever as part of a POST request, for any purpose at all. The definition is so vague that a POST request really has no protocol semantics at all. POST doesn't really mean 'create a new resource'; it means 'whatever.' [p. 41]

And "whatever" is never a good thing to mean.

So don't mean that.

And restrict your POST to creation of new resources with newly-created identifiers.
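The less-familiar verbs from the take-aways above can be sketched the same way. Again the resource URL is a made-up stand-in, and note that LINK/UNLINK are extension methods many servers won't honor:

```python
import json
import urllib.request

RESOURCE = "https://api.example.com/posts/42"  # hypothetical resource

# OPTIONS: ask the server for its "billboard" of allowed methods.
options_req = urllib.request.Request(RESOURCE, method="OPTIONS")

# PATCH: send only the changed field instead of PUT-ing the whole
# representation; remember it is not guaranteed idempotent.
patch_body = json.dumps({"text": "edited!"}).encode()
patch_req = urllib.request.Request(
    RESOURCE,
    data=patch_body,
    method="PATCH",
    headers={"Content-Type": "application/merge-patch+json"},
)

# LINK: relate this post back to another resource via a Link header.
link_req = urllib.request.Request(
    RESOURCE,
    method="LINK",
    headers={"Link": '<https://api.example.com/users/7>; rel="author"'},
)

for req in (options_req, patch_req, link_req):
    print(req.method, req.full_url)
```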

I Remain,

TheHackerCIO


Tuesday, May 6, 2014

Let's Get Physical!!!!




At Technology Radar Group, last night, TheHackerCIO presented on getting physical.

Physical.

Because it's been too long.

Seeing a lot of hardware hackers at the AT&T Wearables Hackathon, back at year's end, was partially a reminder.

But as one member noted, hardware is how we started.

Several interesting themes emerged from the roundtable. The crucial need to find new strategies for keeping up with technology. The Radar isn't enough. It needs augmentation. The lab we have is a wonderful augmentation. We need to figure out ways to capitalize on it. Rick pointed out that with the advances in virtualization technology, we can now use a lab in ways that a decade ago simply weren't possible. We can practically learn/design/plan/test/build a virtual datacenter with totally agnostic/fungible kit: Cisco, Dell, IBM, Oracle, Juniper, ... whatever. We can build it out with one set of physical kit & swap it later. And a major theme I raised was the crucial nature of fighting with the bugs.

The problems need to be highlighted, rather than worked through. If anything, it's the problem areas where the learning/growth is going to take place. We need to figure out strategies in the lab to track the issues and problems, and get others to face them as well! That's counter to the way it normally happens, isn't it?

But it's precisely the contesting with actual concrete problems that brings the abstract designs back to the reality-point. That's what "Closes the loop."

And that, by the way, was the other major theme of my presentation. Every time I've heard grand abstractions presented, and I've been able to force through an actual physical, concrete implementation example, the disconnect between the theory/abstraction and the concrete/implementation has been immense. Enormous. Totally surprising. So much so that I now almost completely discount abstractions presented in the absence of any supporting test example or demonstration.

One of our further goals for the group, in conjunction with our partner Meetup, the L.A. Cloud Engineering Group, will be an attempt to produce a crowd-sourced eval platform -- probably with a Geeky/Social approach -- where, in my view, capturing such "proof points," "test cases," "demonstrations," or even "benchmarks" in a repeatable, verifiable way will be a central feature. I might even create a widget/button called "Prove it, dammit!"

That might go a long way toward getting people to close the loop.

I Remain,

TheHackerCIO

Monday, May 5, 2014

Jepsen & Distributed Datastores


Kyle Kingsbury is doing an amazing job with his Jepsen project. TheHackerCIO has long been disturbed by the tendency for people to make assertions and claims without the experimental evidence to back them up or provide an assessment basis for them.

Especially in the database world.

Here are a handful of the problems:

I can't tell you how many times I've heard, "Oh, in an inner join using RDBMS X, a nested-loop algorithm will of course perform better depending on which table is the outer and which is the inner."

No doubt.

But these DBMSs have an optimizer. They have tables full of statistics about the data, presumably updated on a regular basis. These vendors have had 20 years to tweak optimizations. Yet the documentation gives no indication as to whether their "optimizer" can pick the right outer table and inner table, or whether you must explicitly pick the right one yourself.

So lots of people just assume that the optimizer can/will do this. Which isn't unreasonable.
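You don't have to assume, though; you can ask the engine for its plan. A minimal sketch with SQLite (the EXPLAIN QUERY PLAN syntax differs per RDBMS, but the idea carries over):

```python
import sqlite3

# Two toy tables; then ask SQLite's optimizer which join order it chose.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE big (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE small (id INTEGER PRIMARY KEY, big_id INTEGER);
""")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM big JOIN small ON small.big_id = big.id"
).fetchall()
for row in plan:
    # The last column says which table is SCANned (the outer) and
    # which is SEARCHed via an index (the inner).
    print(row[-1])
```

Run the same query against your real schema and statistics to see whether the optimizer picked the order you expected.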

But the days have come where things need to be specified tighter.

We simply need clear black/white, preferably not greatly hedged,  statements in the documentation. Statements that can be tested. Verified. Proven. Or disproven.

The newer world of NoSql is no exception to this rule or problem.

But Kyle has been there.

Kyle got interested in understanding the issues around the NoSql databases. But he did things the right way: he set up a controlled environment, and began systematically testing, examining, and proving out how the CAP theorem implications actually work in a partitioning environment. This led to a number of surprises for the vendors, ... not to mention the users???

You can take a look at his full Jepsen Project here. He's tested Cassandra (my current focus), Redis, Kafka, NuoDB, Zookeeper, Riak, Mongo, Postgres, possibly others ...

To get a proper sense of this correct, test-based approach, I recommend this. Here are just a few enticing flavor notes, taken from a section that deserves your most careful attention, entitled "Testing Partitions":

  • Theory bounds a design space, but real software may not achieve those bounds. We need to test a system's behavior to really understand how it behaves.
  • To cause a partition, you'll need a way to drop or delay messages: for instance, with firewall rules. 
  • Running these commands repeatably on several hosts takes a little bit of work.
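To make the second bullet concrete, here is a rough sketch of the kind of iptables rules such a test harness generates. The addresses are illustrative, and actually applying these requires root on the test hosts:

```python
# Sketch of the firewall-rule commands a Jepsen-style partition test
# would run on each host (IPs are illustrative stand-ins).
def partition_rules(peers):
    """iptables commands that drop all inbound packets from each peer."""
    return [f"iptables -A INPUT -s {ip} -j DROP" for ip in peers]

def heal_rules(peers):
    """The matching commands that delete those rules, healing the partition."""
    return [f"iptables -D INPUT -s {ip} -j DROP" for ip in peers]

for cmd in partition_rules(["10.0.0.2", "10.0.0.3"]):
    print(cmd)
```

A real harness would ship these over SSH to each node, run the workload during the partition, then apply the heal rules and check what the datastore claims survived.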

Work might be a necessary evil. But understanding isn't going to come without it. Or without actual, experimental testing.

In this article, you will see exactly what to set up to get started with your own multi-node, partition-able, experimental test-bed, within which you can see how your NoSql is going to behave.

Because there's no short-cut.

Or, as an earlier time might have put it,

There is no royal road to enlightenment.

I Remain,

TheHackerCIO




Friday, May 2, 2014

The Great Resume Rewrite



I spent all day writing.

But it had to be done.

My work. My way. My resume.

Because MWMW ("My Work, done My Way") is the Golden Key to happiness. (But we'll blog about that another time.)

I don't have very many regrets in life, but one of them is that I didn't reject the irrational advice and conventions that got imposed on my resume through the years by well-meaning and trusted advisors, recruiters, employers, and so forth. That will never happen again. Because from now on, it's MWMW. You know what I'm talking about:
  • Ridiculous and contrived stylistic conventions, such as omitting the subject of a bullet point, because that subject happens to be yourself, repeatedly.
  • Boilerplate text, cliches, corporate-speak, and management buzzwords. 
  • Evasion, manipulation, and "spin."
  • Arbitrary length prescriptions.
  • A bureaucratic tone attempting to convey objectivity and solidity, but in reality conveying stodginess and stupidity.
So I took out my resume, started right from the top, and rewrote it straight the way down. I found I had to do it in several passes. Each time I came back to the top and restarted, it got better. And it just kept getting better and better.

I mentioned before, in my post where I resolve to do this rewrite, that I would report back on the results. To quote myself, which seems fun:
Contrarian resume is coming. I'm going to be 100 percent grammatical, spelling out the first person for every one of my many accomplishments. It's all about me, as indeed it should be, being my resume, so "I" and "me" are going to be a major element. (I'll report here later with the final count, so be sure to check back here in a couple of days.) I also will pay no attention to length at all. I will only concern myself with showcasing relevant experience. If I fall one word over to another page, that's the way it's going to be distributed. If it takes 10 pages, so be it. I'm going to highlight my short-term projects, as well, by explaining that the instability was the clients', not mine. I might even mention that I resent them for it.

In short it's going to be fresh, honest, clear, relevant, personal, non-deceitful. It's going to be a blast to write it.
So this is fun time;  review time!

And there are all kinds of interesting results and things to reflect on as well! For instance, have you ever noticed how easy it is to just glibly throw out a statement about something you're going to do, and then, later, find out that "the devil is in the details"? This coming note is a great example of that, and a good illustration to pass on to management of how surprises often arise despite our best intentions, and how things are often not as simple as we assume.

I realized, in attempting to count my use of "I", the following:

  • Simple word-finding/word-counting doesn't work well on the document, because "I" is a letter component of many words, and gets caught by the count programs in places where it should not get counted.
  • Me, however, also needs to be counted.
  • So also, do contractions, such as:
    • I'm
    • I'd
    • I'll
I ended up hand-counting about 60 self-references using variations of the first-person pronoun. So that's pretty good. I imagine that would raise a red flag with the scanning software that started me down this pathway.
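For anyone repeating the exercise, the counting pitfalls in the bullets above can be dodged with word-boundary matching. A quick sketch (the pronoun list is mine, not exhaustive):

```python
import re

# Word-boundary matching avoids counting the letter "I" inside other
# words (the "I" in "It", the "me" in "resume"), which is exactly what
# throws off naive word-count tools.
FIRST_PERSON = re.compile(r"\b(I'm|I'd|I'll|I've|I|me|my)\b")

def count_self_references(text):
    return len(FIRST_PERSON.findall(text))

sample = "I'm rewriting my resume. It describes what I did, my way."
print(count_self_references(sample))  # -> 4
```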

The length of the resume, including all the relevant material I wished to include, ran well over the conventional recommendations. I hit 4 pages, double the conventional Whiz-dumb.

Just to establish that these conventions are commonplace, and held even by the best of people, here is what Cracking the Coding Interview says -- a book I highly recommend, by the way, not for its resume advice, but for its approach to career development and interview questions:


In the US, it is strongly advised to keep a resume to one page if you have less than ten years experience, and no more than two pages otherwise. Why is this? Here are two great reasons:
  • Recruiters only spend a fixed amount of time (about 20 seconds) looking at your resume. If you limit the content to the most impressive items, the recruiter is sure to see them. Adding additional items just distracts the recruiter from what you'd really like them to see.
  • Some people just flat out refuse to read long resumes. Do you really want to risk having your resume tossed for this reason?
If you are thinking right now that you have too much experience and can't fit it all on one page, trust me you can. Everyone says this at first. Long resumes are not a reflection of having tons of experience; they're a reflection of not understanding how to prioritize content.
As you saw in my former posting, I'm a contrarian about this advice. Does that mean that I'm not reasonable about it? Far from it! Here is the logic of the way I think, formulated into a response to the above:

Yes, I'm in the US, and I do indeed wish I could be in the UK instead, where a curriculum vitae is commonplace. Since we're stuck with the US, I'll grant your assessment that recruiters, and for that matter the actual people hiring, spend only 20 seconds or less "looking at [my] resume." But I will note a point or two.
I tend to think that far less than 20 seconds is spent. Probably between zero and seven seconds. I base this estimate on the kind of queries I receive by email from recruiters who have "searched" my resume. Further support for this estimate comes from the number one, almost universal first question which hiring interviewers always ask: "I know I have your resume here, BUT would you please describe your recent experience for me first, before we get started discussing the details ..."  
If I limit the content, as you suggest, the recruiter (or hirer) is still NOT going to see the most impressive items. He's never going to see them. Adding additional items will NOT be a distraction, because there has never been attention focused on the document in the first place. 
Furthermore, if someone "flat-out" refuses to read a "long" resume -- and I challenge the notion that ANYTHING over a page or two can, by any stretch of the imagination, be considered "long" -- I want them to toss it.
I'm interested in working with people who are able to read, like to read, and, frankly, are not burdened by supporting detail and explanation. By the way, will the analysis documentation and specification of the software I produce be less than a page or two? Does it show a failure to properly delimit, prioritize, and hierarchically organize if my documentation runs longer than a page or two?
Finally, I would like to address the issue mentioned about prioritizing content. I agree that prioritizing content is extremely important. And for this reason, I do employ the reverse-chronological method, which is common and conventional. Experience becomes increasingly irrelevant as it ages, and so this approach allows a natural and organic tapering of emphasis. The reader can decide for himself at what point the experience becomes no longer relevant for consideration. Yet he can, if he wishes, follow the logic of the career path back from its origins. Since this convention has a rationale, I will retain it.
After adding myself back into my resume and adopting a human tone, my resume is such a pleasure to read. I read every word of it out loud to myself before sending it on to the recruiter. The honesty of the whole document is just so refreshing. For instance, here are two of my favorite snippets:
  • When this naughty Government Sponsored Entity got caught cooking the books, I was brought in to provide a rules-engine based effort to re-cook those books, by reposting every trade from the bond model to the sub ledger and general ledger.
  • Naturally, my on-shore team had to re-develop everything “delivered” by the offshore team. 

I Remain,

TheHackerCIO




Some REST for the Wicked



The wicked coders at Dev Talk LA started a REST book the night before last. Discussion was amazing. First we covered the paper on functional programming, which we didn't get to last week. This paper is a response to another paper, so you should read both. Then we did chapter 1 of the REST book.

One fellow geek brought his date to the Meetup -- this was his date night entertainment! I hate getting pushed off my UberGeek pedestal, but stuff happens! I look forward to seeing the happy couple at future Meetups.

I was kind of blindsided by one discussion, which mentioned "traits" in Java. I'd never heard of them. That's pretty weird for a Java techno-maven such as myself, so I need to follow up with him. Google didn't help me out this morning, either. It seems "traits" might be another term for "mixins," which are found in some OO environments, but never in Java. Mixins are a kind of implementation reuse for situations where full multiple inheritance doesn't quite fit the bill. I'm not sure if he's blending the notion of AOP in with traits, but it's something to check up on. Or it could be some kind of library that offers this support in the Java ecosystem.
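For reference, a mixin bolts a slice of behavior onto a class without full multiple inheritance, and it's easiest to see in a language that supports it directly. A minimal Python sketch (the class names here are illustrative, not from any Java library):

```python
import json

class JsonMixin:
    """Mixin: adds to_json() to any class that provides to_dict()."""
    def to_json(self) -> str:
        return json.dumps(self.to_dict())

class Point(JsonMixin):
    """A plain class that gains serialization by mixing in JsonMixin."""
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y

    def to_dict(self) -> dict:
        return {"x": self.x, "y": self.y}
```

Scala's traits, and the default methods Java 8 added to interfaces, give a similar flavor of behavior reuse; that may be what was meant.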

Someone else mentioned GPars, a concurrency framework used quite a bit in Groovy. I don't know that much about Groovy, so this one has to go on my to-be-investigated list. I already spent a few minutes Googling it, and it appears to be an implementation of the "actors" abstraction from Scala and Erlang, but in the Groovy/Java world. I definitely need to spend some time playing with Groovy and taking advantage of its type inference, so I can avoid more boilerplate code. That just needs to happen. Because speed matters.

Another member highly recommended a concurrency talk by Doug Lea. From what he said, I believe that this is the talk.

Moving on to the REST book, the bid system got completely hijacked! Normally, everyone offers page "bids," and the lowest bid wins. But this time, one of our members just jumped in and pushed the last page of the chapter as the opening round!

The discussion focused on one member's key problem with REST, both in his work and in general: getting the human-readable and machine-readable representations integrated. For instance, he hates the whole embedding of versions into the URL for APIs. This is ugly and hard to support. Instead, if you traverse from the top of the hierarchy, and have clients parse a page of "billboard" information, which describes the kinds of additional services and methods available in the API (at that version), then you can completely avoid this. You are basically building the API spec into the page returned, which the client reads, scrapes, and uses to invoke the various additional API calls.

The problem is, this scheme doesn't allow permalinks and the use of bookmarked pages in the browser. You have to start at the top of the hierarchy and traverse it, because the site might have been reorganized since the last time your client noted a permalink.

But it occurred to us that you could have a versionless permalink API, that mapped permalinks to whatever new, reorganized location they now properly dwelt in. And that would effectively take care of most of the whole issue.
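The shape of that scheme can be sketched quickly. Everything below is hypothetical (made-up link names and paths), just to show a client following the billboard instead of hardcoding versioned URLs:

```python
# A hypothetical "billboard" document returned from the API root: it
# advertises the services currently available, so clients discover paths
# instead of baking versioned URLs into their code.
BILLBOARD = {
    "version": "2.3",
    "links": {
        "search": "/svc/search",
        "orders": "/svc/orders",
    },
}

def resolve(billboard: dict, rel: str) -> str:
    """Look up a named link from the entry point; fail loudly if the API
    no longer advertises it (e.g. after a site reorganization)."""
    try:
        return billboard["links"][rel]
    except KeyError:
        raise LookupError(f"API no longer advertises {rel!r}")
```

A versionless permalink service would be the same idea one level up: a stable name mapped to whatever location the resource currently dwells in.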

We still need to get our heads wrapped around the right way to do all of this, but the book promises to help us immensely in that regard.

I Remain,

TheHackerCIO


Tuesday, April 29, 2014

Silicon Valley Via HBO




Television is a hateful thing most of the time for TheHackerCIO. Whenever I'm watching the vapid content, I'm sure to have a laptop with me so I can learn something or try out a technique. BTW, you should always be coding on your laptop: during meetings, in front of the TV, while listening to a presentation. All those wasted gaps of time can easily be redeemed by coding. It's a tremendously helpful habit.

But last night was different. I downloaded and watched the first episode of Silicon Valley. Geeks everywhere are talking about it, here and here, for instance. I loved it. I'm ordering HBO for the first time.

You can watch the first episode on the HBO Website.

If you're still unable to decode the characters, this page will help you.

In my opinion, they've nailed the culture. They've clearly done their research. Even my kid said, "They're just like those Hackathons you do." And that's reason to love it.

I remain,

TheHackerCIO



Monday, April 28, 2014

A Contrarian Resume



"Contrarians" are a kind of investor who wagers against the conventional wisdom. Of course, the word extends beyond just the financial world.

TheHackerCIO is a contrarian.  

He sees no reason to respect conventional wisdom. Historical tradition isn't enough to justify decisions, policies, or edicts. They must be reality based. They must be true. They must make sense. They must have reasons and good ones that justify their prescriptions.  Otherwise, it's time to ignore them, or defy them with everything you've got.

ManagementSpeak, for example, is the accepted convention. Wikipedia explains that:
Corporate jargon, variously known as corporate speak, corporate lingo, business speak, business jargon, management speak, workplace jargon, or commercialese, is the jargon often used in large corporations, bureaucracies, and similar workplaces.[1][2] It may be characterised by sometimes-unwieldy elaborations of common English phrases, acting to conceal the real meaning of what is being said. It is contrasted with plain English.
Readers of TheHackerCIO will recognize this as the language of the Bloated Behemoth Enterprise. One of many pathologies found there.

This one has to be opposed with everything you've got. Why? Because it has evil purposes: dishonesty, evasion, pretense, concealment of truth, deliberate confusion to distract, or at best, a lack of candor, openness, or transparency.

And one of the most pernicious effects of ManagementSpeak is the nearly universal impact on resumes:

  • Boilerplate is mandatory. 
  • Never use a personal pronoun. 
  • Only 3rd person. 
  • Use acronyms to blast the brain. 
  • Ungrammatical bullet points describing accomplishments, which never even feature a subject but start off with a verb, such as
    • Analyzed requirements with end users
                     instead of
    • I analyzed and discussed requirements and features with key users.
  • Then, since the result is so mind-stultifying, no more than 2 pages of that, thank you very much.


What brought this to mind was that I applied for a job. The e-submission had a check box for a free resume evaluation, so I unchecked that box, having no desire for more spam to deal with. Naturally, the software ignored my selection and the next morning I had my free evaluation. I immediately clicked the link to unsubscribe, and haven't been troubled again, luckily.

But ...

I read their "eval" -- and it was evil.

They wanted $300 to fix these problems:
1. It was too long.
2. It had a personal pronoun.
3. It suggested deceitful ways to, as they put it, "minimize" short-term assignments.

Needless to say, TheHackerCIO isn't coughing up hundreds of bucks to get his resume into conventional shitform. But he has been acutely aware that rewrite-time was nearing. Now I just have to do it. Starting as soon as I post this, I'll be doing a full top-to-bottom rewrite.

Contrarian resume is coming. I'm going to be 100 percent grammatical, spelling out the first person for every one of my many accomplishments. It's all about me, as indeed it should be, being my resume, so "I" and "me" are going to be a major element. (I'll report here later with the final count, so be sure to check back here in a couple of days. UPDATE 2014-05-02: I counted at least 60 instances of self-reference, including I, Me, I'll, I'd, and so forth. ) I also will pay no attention to length at all. I will only concern myself with showcasing relevant experience. If I fall one word over to another page, that's the way it's going to be distributed. If it takes 10 pages, so be it. I'm going to highlight my short-term projects, as well, by explaining that the instability was the clients, not mine. I might even mention that I resent them for it.

In short it's going to be fresh, honest, clear, relevant, personal, non-deceitful. It's going to be a blast to write it.

Never yield to evil. It leaves a bad taste. And you won't find happiness that way...

I Remain,

TheHackerCIO









Friday, April 25, 2014

A Headhunter Hoist on His Own Petard



Fridays are unusual.

This morning, particularly so.

TheHackerCIO breakfasted with a Headhunter.

And made him an offer!

Said headhunter was expecting to hunt, but found himself the hunted.

It just doesn't get better than this.

You may wonder why the author of "Why Hackers Hate Headhunters" would like one.

Well this one wanted to understand technology. He asked intelligent questions.  He wanted to meet me face to face. He didn't even text while we ate. He was on time. He was interesting. He was intelligent. He wanted a long term relationship.

In short, he was too good to be a headhunter. So I turned the tables on him, and asked him to consider doing some Business Development on the side for my boutique. We need people like that.

So the upshot is that the Engineer -- or in this case, the Recruiter -- was hoist with his own headhunting petard, which is great sport, according to Shakespeare.



I Remain,

TheHackerCIO






Thursday, April 24, 2014

The Apostle of Conceptual Integrity

Conceptual Integrity Part 2

[note: Click here to read Conceptual Integrity Part 1]

TheHackerCIO bit off more than he intended.

After yesterday's post about Conceptual Integrity, he rummaged around the library to dig out The Mythical Man-Month, that classic Geek tome by Frederick P. Brooks. While I was at it, I realized that I had bought, but never finished reading, his follow-on work, The Design of Design.

I last read The Mythical Man-Month on its 20th anniversary -- 1995. The anniversary edition features an essay in which he notes that the central argument of the whole book is Conceptual Integrity! And, indeed, following the index, there are 13 entries. There are 15 more in The Design of Design. Far from repudiating the idea as outdated, he states in that essay that "Today I am more convinced than ever. Conceptual integrity is central to product quality."

So, the series is going to have to be lengthier than I had initially imagined. This topic is one of the great ones, and it enters into direct opposition to some rather highly touted modern approaches, namely so-called "Patterns" and "Pattern Languages," and certain agile and TDD approaches, which explicitly reject attempts to architect things, and prefer to write something that "works" and slowly refactor it into a final product.

Today, I'll just put forth the minimal essence of his theory. I can simply use his 20th-anniversary chapter, "Propositions of The Mythical Man-Month: True or False?", which he abstracted and condensed to challenge people to examine and test. I count 7:

  • A sharp team is best -- as few minds as possible.
  • A team of two, with one leader, is often the best use of minds.
  • Conceptual integrity is *the* most important consideration in system design.
  • To achieve conceptual integrity, a design must proceed from one mind or a small group of agreeing minds.
  • Separation of architectural effort from implementation is a very powerful way of getting conceptual integration on very large projects [Small ones too].
  • If a system is to have conceptual integrity, someone must control the concepts. That is an aristocracy that needs no apology.
  • A conceptually integrated system is faster to build and to test.

Compare this to the propositions of Paul Graham:
  • Design usually has to be under the control of a single person to be any good. 
  • You can stick instances of good design together, but within each individual project, one person has to be in control. 
  • After the talking is done, the decision about what to do has to rest with one person.
  • A lot of the most famous scientists seem to have worked alone. 
  • Design by committee is a synonym for bad design.
  • Good design requires a dictator.
  • Good design has to be all of a piece.
  • If a design represents an idea that fits in one person's head, then the idea will fit in the user's head too.
Forthcoming: Part 3

I remain,

TheHackerCIO

Wednesday, April 23, 2014

Conceptual Integrity Part 1 ... or Why Committees Can't Do Doodly

Last night at DevTalk LA (that Geeky local book club), having finished our book, we read two articles. We do this in-between books, to allow members a little extra time to get the book in-hand. Next week, we'll start the eagerly anticipated RESTful Web APIs by Richardson & Amundsen.


This book appears to be a complete rewrite of their earlier RESTful Web Services (2007). That one was, by universal acclamation of Dev Talk LA, one of the better books we've recently read, and advance opinion holds the rewrite to take things to the next level and clear up all our questions.

The two articles can be (and should be!) read on-line here:

We spent most of the time on Paul Graham's essay.

If you haven't read Paul Graham, you need to stop reading here and get on with it. Buy his book. Read his essays. Just do it. The short version of why is this: Paul Graham is a twice-successful startup entrepreneur who built his success upon technical excellence. He's a big proponent of the rapid development he found possible through the use of Lisp, of which he is an advocate, and he is presently designing and promoting Arc, a Lisp derivative of his own. But there are two additional factors that make him a compelling figure.

First, he's done something other than technology. He's also a painter. And the interplay of what he has learned from both shows clearly in his very insightful and reflective essays. Secondly, he understands, in a very no-nonsense way, the value and importance of passion. From last night's essay, for instance, came this timeless quote:
To make something good, you have to be thinking, "wow, this is really great," not "what a piece of shit; those fools will love it." 
Which brings us to the point of today's essay.

Conceptual Integrity.

It's important.

Really important.

Paul Graham gets it, too.

As he puts it (highlights mine):
Notice all this time I've been talking about "the designer." Design usually has to be under the control of a single person to be any good. And yet it seems to be possible for several people to collaborate on a research project. This seems to me one of the most interesting differences between research and design. 
There have been famous instances of collaboration in the arts, but most of them seem to have been cases of molecular bonding rather than nuclear fusion. In an opera it's common for one person to write the libretto and another to write the music. And during the Renaissance, journeymen from northern Europe were often employed to do the landscapes in the backgrounds of Italian paintings. But these aren't true collaborations. They're more like examples of Robert Frost's "good fences make good neighbors." You can stick instances of good design together, but within each individual project, one person has to be in control. 
I'm not saying that good design requires that one person think of everything. There's nothing more valuable than the advice of someone whose judgement you trust. But after the talking is done, the decision about what to do has to rest with one person.
Why is it that research can be done by collaborators and design can't? This is an interesting question. I don't know the answer. Perhaps, if design and research converge, the best research is also good design, and in fact can't be done by collaborators. A lot of the most famous scientists seem to have worked alone. [ed. note: see my "Never Hire The Greatest Scientist The World Has Ever Known"] But I don't know enough to say whether there is a pattern here. It could be simply that many famous scientists worked when collaboration was less common. 
Whatever the story is in the sciences, true collaboration seems to be vanishingly rare in the arts. Design by committee is a synonym for bad design. Why is that so? Is there some way to beat this limitation? 
I'm inclined to think there isn't-- that good design requires a dictator. One reason is that good design has to be all of a piece. Design is not just for humans, but for individual humans. If a design represents an idea that fits in one person's head, then the idea will fit in the user's head too.

I wanted to collect these ideas into one place and last night was a wonderful impetus to do so.  But this posting is already too long, so it looks like we have to kick off a series. And what a wonderful topic for a series! Conceptual Integrity. And, I'm very happy to have one of my heroes, Paul Graham, give as forceful and thoughtful a kickoff as could be imagined.

I Remain,

TheHackerCIO







Tuesday, April 22, 2014

Start With a Soldering Iron


Close followers of TheHackerCIO will know that he's in major retooling mode. Fresh back from Karate in Japan, he's retooling not only his Kata, but his technology. He's donned the white belt for a fresh look at tech from hardware up.

From the basics. The fundamentals.

It was increasingly clear from last year that I needed to get hardware back in my life. Sitting in our CIO and CTO offices, listening to our classical music, leaves us far too detached. We need to get physical, physical. We need to get to the hardware. At the AT&T Hackathon several months ago, the hardware hackers impressed and inspired me with the "wearables" they concocted. And now it's easier than ever to get involved with Raspberry Pi -- whatever your age -- and do some interesting hardware/software projects that interact with the environment in interesting ways.

I wish I still had the URL to an essay I read years ago about how to become a "guru" at programming language X. [I no longer remember the exact question, or language, and Google hasn't helped source it]

The advice given, I'll never forget:

1. Start with a soldering iron ...
2. move on to mastering operating systems ...
3. now learn networking ...
4. and assembler ...
5. Start working up the High Level Language Stack. A lot of optionality here. Perhaps:
    C
    Java [forget C++]
    Python
    Lisp

This short list gives us a good basis in procedural and then functional languages. Maybe throw in Prolog for a declarative language.

There are other considerations, of course, but this makes a good overall syllabus. And it's more or less the program I'm embarking on for the next good while.

Bought an Asus laptop as working fodder for the review: I'll start by picking up the new-to-me Windows 8 touch-screen nomenclature and interface, then re-partition it to become a dual-boot Arch Linux and Windows box.

I already learned that, unsurprisingly, as CD and DVD drives become increasingly scarce on laptops, recovery disks in Windows are now just USB sticks. And they only take 512M, which easily fit on the 7G stick someone was nice enough to give me at the SCALE 12 conference last month. All of this is good to know, and once again, helps keep everything real.

Keeping It Real,

TheHackerCIO

Monday, April 21, 2014

WhiteBelt Ninja


Karate has increasingly influenced TheHackerCIO's approach to work. I'm freshly returned from training with the Grandmaster in Japan, where I achieved Ichi Kyu (1st Degree Brown Belt, one rank below Black Belt), and where my daughter made Shodan (Black Belt). There is a tradition enforced at our Dojo which speaks volumes.

A new Black Belt must spend the first month wearing a White Belt.

That's right. You line up in belt order back at the very beginning -- the place where you entered. It isn't so much about "humbling" you, or keeping you from getting "puffed up."

As the GrandMaster says, "Black Belt means now you learn *real* Karate."

And, in essence, it's time to rebuild all your skills from the basics on up  -- reworking and retooling them in accordance with how you *should have* learned them in the first place, had you known then what you know now.

I'm experiencing a little of this just as an Ichi Kyu. So I took advantage of the fact that my training journal -- now 4 years old -- was notably lacking in blank pages, not to mention my newly achieved level, to establish a new training journal and Commonplace Book.

I haven't blogged about Commonplace Books -- they are really just an advanced journaling technique that, interestingly, derives from ancient learning methods dating back to the Greco-Romans and commonly employed up through the Age of Enlightenment. They make an excellent organizational device that improves techno-journals, and they coexist very easily with regular journals. I'll blog about how to use them one day.

But not only am I using this refreshed mindset to improve my Karate and discipline.

I'm applying what I have learned from Karate to the rest of my life. The discipline must spill over, if it's to have any lasting impact on a life.

So, today, I went out and got the latest book for the CompTIA A+ Certification. I know, it almost seems ridiculous -- what, am I going to go to work fixing computers? But really, every ten years or so, you ought to go back and systematically review what's been going on in the hardware world. From there, I'll go back to retool on Java 8, or possibly Linux.

It's great being a White Belt.

Starting From the Beginning,

TheHackerCIO

Friday, April 18, 2014

Additional Complexity, Enemy of Basic Purpose


Religion rarely intersects with technology. But many years ago, TheHackerCIO was struck by a parable from some religious presentation. It captures and capsulizes an issue in such a striking way that I mention it today. The author is, apparently, anonymous, so Google for a source:

The Parable of the Lighthouse

The two men stood on the high cliffs, "What do you think John?" The other man listened to the night and answered, "It looks like we might have another one." The first man nodded his head, "Aye. And it's the third shipwreck this month. I'd best get the crew."
He ran down the beach to a small lighthouse and pounded on the door. "Time to be moving boys; there's a wreck up near the north cliffs." The life-saving crew tumbled out of the lighthouse, and plunged their little boat into the waves with amazing skill.
Such tragedies often struck that lonely coastline; a sudden shift in the winds, or a thick fog rolling across the water and an unlucky ship would slam into the reefs, its hull slashed by the rocks.
The cry would go out, "Abandon ship!", but no sooner had these words left the captain's lips than the lantern of a life-saving boat would appear in the darkness, leading the wrecked seamen to safety.
The little lighthouse soon grew famous. Each day, it seemed, there came a new knock on the door: "I've come to help! You saved my son's life and I want to be part of your crew!" or "Please take this small sum as a sign of my gratitude."
The lighthouse was a crude little life-saving station. The building was just a hut, and there was only one boat. But the few devoted members kept a constant watch over the sea, and with no thought for themselves they went out day and night, tirelessly searching for the lost.
Many lives were saved by this wonderful little station. Some of those saved, and various others in the surrounding area, wanted to become associated with the station and gave of their time, money, and effort for the support of its work. New boats were bought and new crews were trained. The little life-saving station grew.
Some of the new members of the life-saving station were unhappy that the building was so crude and so poorly equipped. They felt that a more comfortable place should be provided as the first refuge of those saved from the sea.
So they replaced the emergency cots with beds, and put better furniture in an enlarged building. Now the life-saving station became a popular gathering place for its members, and they re-decorated it beautifully and furnished it as a sort of club. And so the Lighthouse Society was formed.
Less of the members were now interested in going to sea on life-saving missions, so they hired life boat crews to do this work. The mission of life-saving was still paid lip-service, but most were too busy or lacked the necessary commitment to personally take part in the life-saving activities.
The years passed, then one rainy night the Society was holding its annual formal dinner. The guests were dining by candlelight and dancing to a string quartet when suddenly, "Look, a red flare over the sea!"
The hired crews brought in boat loads of cold, wet, and half-drowned people. They were dirty and sick, and some of them had black skin, and others spoke a strange language. The beautiful new club was considerably messed up, so the property committee had a shower house built outside the club where victims of a shipwreck could be cleaned up before coming inside.
At the next meeting, there was a split in the club membership. Most of the members wanted to stop the club's life-saving activities as being unpleasant, and a hindrance to the normal life pattern of the club.
But some members insisted that life-saving was their primary purpose and pointed out that they were still called a “life-saving” station. They were finally voted down and told that if they wanted to save the life of all the various kinds of people who were shipwrecked in those waters, they could begin their own life-saving station further down the coast. So they did.
As the years went by, the new station experienced the same changes that had occurred in the old. They evolved into a club, and yet another life-saving station was founded.
If you visit the seacoast today you will find a number of exclusive clubs along that shore. Shipwrecks are still frequent in those waters, only now most of the people drown.

I particularly love the happy ending!

This is, of course, what has been happening over the last two decades of Java's development. We've seen the hypercomplexity of EJB 2.0 grow, then fall by the wayside as POJOs simplified things a bit. But we've also seen the hideous addition of generics, contorted into nonsense for the sake of full backward compatibility, and with v. 6 we got configuration fragmented between annotations and XML.

I hate XML, so I'm not unhappy to see annotations as a replacement. But when everything gets larded in as an option, the complexity just gets harder and harder to track. It used to be that you had a lone web.xml file to grasp in order to figure out your deployment descriptor. Now you have to check each web-fragment.xml, together with their "orderings," as well as the web.xml and its "absolute ordering" -- not to mention that all of these, in turn, can use annotations and the Java configuration API.
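To illustrate the ordering machinery (the fragment name below is made up), a Servlet 3.0 web.xml can pin fragment processing order with an absolute-ordering element, while each jar's web-fragment.xml declares its own name and only a relative ordering:

```xml
<!-- web.xml: the container-wide descriptor wins; "others" mops up the rest -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
  <absolute-ordering>
    <name>FragmentA</name>  <!-- hypothetical fragment name -->
    <others/>
  </absolute-ordering>
</web-app>

<!-- web-fragment.xml (in a jar's META-INF): relative ordering only -->
<web-fragment xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
  <name>FragmentA</name>
  <ordering>
    <before><others/></before>
  </ordering>
</web-fragment>
```

So the same deployment question can now be answered in four places: web.xml, each fragment, annotations, or programmatic configuration. That's the lardering the post is complaining about.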

Did we really need all those options, just to deploy a Web Application? Or is it just what I call the Creeping Monster of Unnecessary Complexity?

I Remain, Going Back to Basics, as ...

TheHackerCIO

Thursday, April 17, 2014

The Tribulations of Developers


... more about the book our GeekyBookClub chose last time around. I'd have voted against it!

DevTalkLA at last Tuesday's meeting wrapped up the last chapter of Data & Reality, a supposed classic by William Kent. The book proved highly disappointing. It's a kind of skeptical look at data modeling. Skeptics are generally annoying. If data modeling can't be done objectively, what kind of practical advice does he have to offer us for doing it? Why bother?

This, for example, is from his concluding chapter, entitled "Philosophy":
"So, at bottom, we come to this duality. In an absolute sense, there is no singular objective reality.  But we can share a common enough view of it for most of our working purposes, so that reality does appear to be objective and stable." [p. 149]
What rot! If there is no objective reality at the base, how the devil can anyone share a "common enough view" of such a non-existent-thing in order for it to magically appear to be a thing, let alone an objective and stable thing?

And if we operate on the (supposedly naive) view that there is an objective reality at the base of what we do, and we do this because we *have to*, then what is the point of removing the naiveté?

Anyway, luckily, this rubbish book is now behind the group. And the side-bar conversations were stimulating, as always.

In fact, the great lament of the book club was related to timing. One member noted that the next book, RESTful Web APIs, looks extremely promising -- but he needed to have read it several months ago, when he actually encountered those issues!

TheHackerCIO had a similar experience with the Java Performance book. I came to know about it through DevTalkLA, and when I got it I realized that the client I had been building a performance engineering environment for would have gotten an order-of-magnitude better job out of me, if only I had read the book at the time I began the project!

But, such lamentations, trials, and tribulations are part and parcel of the life of the Hacker. We can only know about what we know at the time we know it.

I Remain,

TheHackerCIO

Wednesday, April 16, 2014

Continuous Improvement for the Self


Yesterday, in discussing the "Close the Loop" principle, TheHackerCIO touched on how one is worse off for not addressing deficiencies. It's simple, really. Not only do you now have a known deficiency, but you also know that you are the kind of person who doesn't fix problems. (At least with respect to this problem.) And that sets you on a course of habitually not fixing problems, which is nothing but a downward spiral.

Don't go there.

There are many applications. For instance, how many times, O fellow Hackers, are you compelled by deadline pressure and/or management to employ what we might humorously call a "sub-optimal" solution or approach? In other words, we put in a hack, or a "work-around," or take on "technical debt."

Now, the reality of life is that this pressure is always going to be there. But each such compromise should be tracked and a solution found for the future. This is the way to move yourself toward the elimination of such issues.

I keep a page in my client-journal where I track everything I've done that I'm unsatisfied with. I just label it my "Technical Debt" page, and make sure I log it. And, naturally, from time to time, when time permits, I come up with remediation approaches and solutions. Even if I can't get them into production, I've at least "closed the loop" on the issue. And so, on a personal level, I have improved myself. Which, by the way, is a very selfish thing -- in the best possible sense.

I hope you too choose to improve yourself. You'll find yourself a much better technologist for it.

I Remain,

TheHackerCIO


Tuesday, April 15, 2014

Closing the Loop



Recently a junior developer asked me one of the rarest of questions: a question about how to improve. He's a very talented developer, and in a few years he's going to be awesome. But there was one chink in his armor I thought he should address.

And there are plenty of others who would benefit from this advice.

As a matter of fact, I could produce a long list of ex-clients, the vast majority of whom never grasped it.

The principle is very simple. But the benefit comes only from the rigorous, regular, -- even habitual --  application of it.

The principle is: always close the loop. 

Here are a few examples.

If you go for an interview, afterwards you must sit down and write up every missed question and failure. You must assess the gaps in your knowledge or presentation and close the loop. Closing that loop might involve learning the answer to a question you missed. It might involve developing a small sample program to recursively traverse a b-tree, because that's what you messed up on. It might be a matter of reflecting on all your past experiences and determining what the nastiest bug you ever encountered was -- because you drew a blank when hit with that question. The point is that you must always close the loop. Address deficiencies so that they are never repeated. By following this policy, you will root out deficiencies and remediate them.
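That kind of remedial sample program needn't be elaborate. Assuming the interview question was about a simple binary tree (rather than a disk-backed B-tree), a minimal recursive in-order traversal in Java -- all names here are illustrative, not from any particular interview -- might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal binary tree node for the exercise.
class Node {
    int value;
    Node left, right;
    Node(int value, Node left, Node right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }
}

public class InOrderTraversal {
    // Recursively visit the left subtree, then the node, then the right subtree.
    static void inOrder(Node node, List<Integer> out) {
        if (node == null) return;
        inOrder(node.left, out);
        out.add(node.value);
        inOrder(node.right, out);
    }

    public static void main(String[] args) {
        //       2
        //      / \
        //     1   3
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        List<Integer> out = new ArrayList<>();
        inOrder(root, out);
        System.out.println(out); // a search tree yields sorted order: [1, 2, 3]
    }
}
```

Fifteen minutes of writing something like this, the evening after the interview, is what closing the loop looks like in practice.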

If you fail to close the loop, you will undercut your self-confidence. Your own mind will know, implicitly, that you didn't take care of business, and that you have a deficiency of which you have not undertaken remediation. That means, without closing the loop, you are worse off for your bad performance. Not only do you have a flaw, but you know it, and you didn't care enough to fix it, or start fixing it.

The violation of the closed-loop principle seems institutionalized in many companies. "Let's move on," they say. "There's no point in doing a full-blown investigation of why this happened." No? Seems like the point would be identification of the problems that led to the debacle, followed by instituting a plan to avoid it in the future. Without closing the loop -- doing a post-mortem examination of failure -- assigning blame, where blame is due -- every future action is subject to the same causes. And, naturally, the same effects.

So always, always close the loop. And of course, you can use your techno-journal to help.

I Remain,

TheHackerCIO