A few weeks ago a software developer came to me with a coding problem. My suggestion was to do X. The developer's response was, "No one ever taught me how to do that." To put this in context, my suggestion was along the lines of: "What do the logs indicate? Have you looked at a stack trace?" I was not asking the developer to do anything outside the realm of the normal software development life cycle (SDLC).
Skip forward a few days to when I provided a developer with a script he could use to do some database manipulation. It was SQL 101, but rather than have the developer look it up, I figured it would be quicker to just type it out and send it over. I added the caveat that he would need to modify the script to work in the particular environment in which he was working.
An hour later he was hovering over my desk complaining that my script didn't work. So I walked away from what I was doing to see what the problem was. Sure enough, a reference in the script to an object on the file system had never been modified to fit his environment. I was embarrassed for the developer when I had to point out that "h:mojobob" did not exist in his development environment. But he wasn't embarrassed at all. It was my fault because I hadn't tailored the script to his specific use.
These are not unusual occurrences. We routinely graduate computer science majors who are unable to write a line of code. I suppose there is no harm in that, but why do we then allow them to work as developers? There was a time when being a software developer carried a certain panache: it described an individual with an inquisitive mind and an ability to think critically. Not so much anymore. Copying a jQuery script from a blog now seems to qualify you as a software developer.
There are still savvy, creative developers out there, but they are drowned in a sea of mediocrity. They end up doing all the real work while everyone else tosses around banalities like "agile development" and "root-cause analysis" in daily standup meetings, hoping they won't have to do any real work themselves. How did we get to this state? There are a number of reasons: a demand for developers that has outstripped supply, shoddy educational systems, generational changes in work ethic, and so on. But the primary factor is the increasing complexity of data systems and the higher levels of abstraction that complexity produces.
I was a Navy officer during the Vietnam War, assigned to a ship as operations officer. It was a choice assignment. I was coming off a destroyer where I had been an engineering officer. My battle station was in the forward fire room, standing in front of a pair of boilers filled with 900-degree steam at 600 PSI. Live steam is invisible and will cut a man in half. Not a great place to be should something go wrong.
Electronics had been a childhood hobby of mine, and I had built a slew of radios, amplifiers and other devices from electronic parts I salvaged from old televisions. I even built a basic AND/NAND gate machine. Back then electronic chassis were hand-wired, so an enterprising young person could build a huge inventory of electronic parts from a half-dozen old TVs, which the repair shops would give me for free.
As operations officer I was in charge of the radio room (known as the radio shack in those pre-Tandy days). One day it was reported that one of our long-wave receivers was down and that my team was going to repair it. I eagerly joined the electronics technician, figuring I could help him diagnose the problem. I looked around the space and noticed no multimeter, no oscilloscope, no schematic diagram: none of the basic tools I would use to start troubleshooting.
I asked about this and the sailor gave me a big grin and said, “We don't do any of that. We just swap parts in and out until it starts working.” So much for all of my knowledge of electronics.
Transistors and circuit boards made it extremely difficult for individuals to get into the heart of electronic machines and manipulate what happened inside. Complex machines must be abstracted to be properly maintained. Testing and replacing individual components like diodes or resistors is not scalable; swapping out whole boards or modules is.
It was inevitable that components became bigger and more complex. As they did, efficiencies of design and scale came into play. A pluggable power supply or preamplifier became a module that could be used across a wide range of electronic devices. The modularization of electronics allowed for mass-produced, less expensive machines. It also abstracted away any understanding of how the machine worked. The preamplifier becomes a black box with x inputs and y outputs, and that is all an "engineer" needs to know to use it to build another machine. And that reminds me of the current state of software development.
By the 1970s, mainframe development had become fairly stagnant and unexciting, built on high-level languages like COBOL, FORTRAN and RPG. These were, and are, totally cookbook languages; not a lot of creative thought is needed to write a COBOL program. If you wanted to do anything different with a mainframe you had to drop down to assembler, which was fun and challenging but required thousands of lines of code to do anything. Mainframes were not something bright young engineers and scientists were drawn to. Minicomputers, microcomputers and personal computers were compelling, and they provided the genesis for the current wave of software development.
First- and second-generation interactive computers opened up computer science and programming to a lot of frustrated geeks who disliked being unable to get inside the machine and hated the onerous overtones of big business and big government that mainframes had come to signify. It was no accident that Tracy Kidder's Pulitzer Prize-winning 1981 book about the creation of a minicomputer was titled The Soul of a New Machine. Smart young people wanted to control machines from the inside, not just interact with them from the outside.
Thirty-one years after The Soul of a New Machine, software development has reached the level of abstraction that electronics engineering reached years ago. I don't see a lot of software developers writing code that talks directly to the kernel of the operating system. Software for early PCs was written pretty much to bare metal: the processor and a very light operating system.
Windows 3.1 and Charles Petzold provided a layer of abstraction on top of that, but one that still felt close to the machine. Using C you could still drop down to machine code if you needed to do something more efficiently than Windows could. Open-source Linux offered the ability to control how your software worked with bare metal, but for the most part Linux users just operate on top of someone else's code, sometimes free but more often commercial.
After that we layered application platforms on top of the operating system, and software developers were no longer even aware of the OS except as something that had to be configured correctly. Application platforms come with cookbooks that form the basis for "programming" on the platform. Programmers are so stuck in their particular niche that thinking outside the box is not only a stupid cliché, it isn't even possible.
I was peripherally involved with a project involving mobile access to a browser-based web application. The code was written in C# using .NET and various Microsoft APIs, as well as Java running on Linux. Everything worthwhile was exposed as a RESTful web service. The folks building the mobile application were writing it in native iOS code. They did not understand how to consume the RESTful services, nor were they able to use any of the security objects exposed for the application.
Their solution was that "someone else" needed to write middleware that did all the heavy lifting. The middleware was supposed to provide the mobile platform with an exposed API that was essentially pluggable from iOS. Unbelievable. They write mobile code, but they are unable to connect to any existing systems. I guess that might work if your goal in life is to write Angry Birds clones, but it just won't work for end-to-end business systems.
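For perspective, consuming a RESTful service from native iOS code is not exotic; it is a few lines of Foundation. The sketch below is purely hypothetical (the endpoint URL, the JSON shape and the bearer token are all stand-ins, not the actual system), but it illustrates roughly what the mobile team was being asked to do on their own:

```swift
import Foundation

// Hypothetical customer record returned by an assumed REST endpoint.
struct CustomerProfile: Decodable {
    let id: Int
    let name: String
}

// Minimal sketch: fetch one customer profile over HTTPS and decode the JSON.
// The URL and the Authorization header are placeholders, not the real services.
func fetchProfile(completion: @escaping (Result<CustomerProfile, Error>) -> Void) {
    var request = URLRequest(url: URL(string: "https://example.com/api/customers/42")!)
    request.setValue("Bearer <token>", forHTTPHeaderField: "Authorization")

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        do {
            let profile = try JSONDecoder().decode(CustomerProfile.self, from: data ?? Data())
            completion(.success(profile))
        } catch {
            completion(.failure(error))
        }
    }.resume()
}
```

Add the token the security services actually issue and some error handling, and that is roughly the "heavy lifting" the team wanted someone else's middleware to do for them.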
We have created a generation of software developers who are not much different from burger flippers or cooks at a chain restaurant. They follow precise recipes created by others to "write" code. Suppose they need to write a pluggable routine to modify a customer profile. They already have an example of how to create a new module for the CRM system. They do an Internet search for code that modifies a customer profile for that CRM system. Then they start trial and error, changing this and that until they get it to work. The Internet is their friend as they search each new error message for an answer to their problem.
Once they succeed, the code is locked and loaded. The developer fears any modification to it because they don't really understand what the code is doing. They are writing against the CRM system's API, a black box, using code they didn't create. This is the inevitable result of black-box coding. No one except the product team knows what is in that box, and getting to that product team is harder than getting a ride on the Concorde.
I once had a significant problem with code I had written against a Microsoft application. After weeks of jumping from one level of premier support to another, and calls from my CIO, I was finally granted a call with the lead on the product team. When I posed my question he laughed and said, "Yeah, that won't work, but this will." It took him less than five minutes to help me. No one else on my long path to that call had real knowledge of what was in that black box. We are getting very close to a state where no one will know how to build the machines we depend on.
One more thing about that CRM problem above: It would have been much easier to create a small application and write directly to the database in a non-invasive way. But that would have violated the licensing agreement with the CRM vendor, which would mean no support. That leaves us stuck using their black-box API or engaging them to build a product modification or add-on.
Of course, there are other reasons why software development is in such a sad state. Demand truly has outpaced supply, and many developer positions are filled by people who just don't have the skill set or the mental capacity for the work. And it isn't as exciting a profession as it was 20 years ago. The coding part of software development is not all that thrilling; the fun is in designing the system itself and creating the algorithms for the code. Unfortunately, when all a developer can do is code against an existing API, there isn't a lot of room for clever, creative algorithms.
That being said, we still haven't explained why there are some exceptional developers who always amaze and always perform. I think the answer is that there have always been killer coders out there. They are just harder to find among all the rabble.
Please address comments, complaints, and suggestions to the author at [email protected].