I was lucky. I learned IT in an incredibly immersive way. My first two jobs were in organizations that followed the very best practices for their day. Because it was all I knew, I considered that to be normal. I had no idea how unique those organizations were. I didn't know at the time that the rest of the industry would not adopt these techniques for a decade or more.
My next career moves brought me into contact with organizations that did not adhere to the same best practices, or to any others. In fact, they were unaware that such best practices existed at all. I considered this a bug and went about fixing it, dismayed that anyone would settle for less. I was re-creating what I considered "normal."
Thank you for this wonderful article. CS should absolutely be taught this way!
Teaching best practices in university is an interesting idea, but I believe it should be applied with care. Computing Science is not the same as software development, just as mathematics is not the same as calculating. If too much emphasis is placed on best practices, the core theoretical foundations of computing may be neglected. It would be overkill to impose IDE, toolchain, and methodology requirements on an algorithms class whose objective is to learn complexity theory. Care must be taken that students do not come away with a firm grasp of the process but only a tenuous grasp of the theory. This is why both computer science and software engineering programs exist; students following one stream will have different expectations and objectives than those following the other.