Software 2.0 is a new name for one of the two major philosophies in computer science. To build smart machines, one school of thought says we should tell the machine what to do; the other says we should show it. Telling the machine what to do eventually led to the rise of rule-based systems and software as we know it today. The second school of thought, showing the machine what to do, eventually led to machine learning, known today as Software 2.0.
With traditional software, every time an independent "if" statement is written, the number of possible execution paths doubles; in other words, the complexity of the code can grow exponentially with the number of branches. Software companies try to manage this complexity by hiring more programmers, but more programmers write more code, which adds even more complexity, until the company reaches its maximum hiring capacity and from that point on almost all effort goes into maintaining and debugging the code base. Software 2.0, however, learns from its mistakes and does not need programmers' help to improve. With proper use of Software 2.0, businesses could continue to invest their resources in creativity rather than in maintaining and debugging what they already have.
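The path-explosion claim above can be made concrete with a small sketch (illustrative code, not from the original): three independent "if" statements already produce 2^3 = 8 distinct execution paths, each of which has to be tested and maintained.

```python
from itertools import product

def feature_behavior(a: bool, b: bool, c: bool) -> str:
    """Each independent `if` doubles the number of execution paths."""
    path = ""
    if a:            # branch 1: 2 possible paths so far
        path += "A"
    if b:            # branch 2: 4 possible paths so far
        path += "B"
    if c:            # branch 3: 8 possible paths so far
        path += "C"
    return path

# Enumerate every combination of branch outcomes.
paths = {feature_behavior(a, b, c)
         for a, b, c in product([False, True], repeat=3)}
print(len(paths))  # 2**3 = 8 distinct paths
```

With n such branches the count is 2**n, which is why hiring linearly more programmers cannot keep up with exponentially more paths.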
Traditional software is incapable of adapting to new environments, so the code has to be written in-house and deployed only once it is fully functional. Software 2.0 learns from its mistakes: all you have to do is define the task, deploy it, and monitor how fast it learns. When the performance is good enough, you can activate your new feature; if it is not, you can simply define a new task.
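The define-deploy-monitor loop described above can be sketched in miniature. This is a toy example, not any real deployment API: the task is learning y = 3x from examples, the "program" is a single learned weight, and the `1e-6` "good enough" threshold is an assumption chosen for illustration.

```python
import random

random.seed(0)

# 1. Define the task: learn y = 3x from 100 example pairs.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(100))]

w = 0.0    # the learned "program" is just one weight here
lr = 0.1   # learning rate

def loss(w: float) -> float:
    """Mean squared error of the current weight on the task data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# 2. "Deploy" and monitor: train until performance is good enough.
for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
    if loss(w) < 1e-6:   # "good enough" threshold (an assumption)
        break

# 3. Activate the feature once the monitored loss is acceptable.
print(f"learned w = {w:.3f}, loss = {loss(w):.2e}")
```

The point of the sketch is the workflow, not the model: the programmer specifies what counts as success and watches the numbers, while the system itself improves from its own mistakes.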