This was a hot topic a while back for designers and engineers. But just because the hype has passed doesn't make the question any less important for aspiring technology workers to ask themselves. The world we live in is splitting. We've wondered how this dichotomy will work, what it will mean, and where it will take us. The industry has seen it coming for a while—because we're the ones ripping it in half.
On one side, we have our familiar analog world—the world of chemistry and Newtonian physics. This is the world we all grew up in. In this world, things reliably follow patterns of interaction. Fruit falls to the ground. Fire burns. Seasons change. We can work in this world just as reliably because these patterns are essentially rules for how the world works.
Since we have more than a passing familiarity with these rules, we are able to leverage and manipulate them to our will. We've invented recipes that use controlled heat to bake. We've invented games that use gravity and the laws of physics for play. We've invented furniture that relies on the natural qualities of materials to function. We are able to do all this because we know the rules.
We're told in our studies to first learn the rules before we break them. By learning the rules, you gain a broader understanding of the subject at hand and why the rule exists. That additional context enables you to make strategic decisions about the rules you break and how you choose to break them.
Obviously, there's some fun in breaking all the rules. Rarely, however, are those explorations meaningfully productive, let alone pragmatically implementable within a product development sprint.
Even though some of us consider ourselves to have grown up in this digital world, it is simply not possible to glean a sufficient working knowledge of how it all works just by existing within it. Therefore, to learn about the other half of the dichotomy, you have to learn code.
While some analogies, such as languages, translate well, other qualities, such as the laws of physics, don't. Each piece of hardware can define new means of physical interaction, and they do regularly. In addition, operating systems often act as their own worlds, defining what's possible within them. Sometimes capabilities translate to other operating systems, sometimes they don't. Rarely will the code. As for the user, it's up to the software companies to maintain consistent interaction patterns between their applications—and what incentive do they have to do so?
The simplest question you can ask yourself when deciding whether or not to learn to code is: am I going to create things for our digital world?
If so, having some foundational knowledge of languages, capabilities, techniques, and technologies would be helpful. To be clear, I'm not saying you need to become a full-stack engineer, constantly trying to stay current on Node and Ruby, but having a sense of the basics can go a long way.