For the last half-century or more, our computers have relied on essentially the same classical computing process of zeroes and ones and essentially the same structure, the von Neumann architecture. For most of that time, this arrangement has not just worked well; it has worked better and better. But that era is coming to an end, along with the observation underlying much of the improvement: Moore’s Law.
Practically since the first integrated circuits, the number of transistors on a chip has doubled every couple of years, and the speed and capacity of our computers have roughly doubled along with it. But each doubling has come at an increasingly dear price: wringing more speed from a chip takes more energy and produces more heat. So alternatives have proliferated.
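To make the compounding concrete, here is a back-of-the-envelope sketch in Python. Starting from the Intel 4004 of 1971 and its roughly 2,300 transistors, and assuming a strict two-year doubling period (both simplifications, not a precise model of the industry's history), the projection lands within the same order of magnitude as today's largest chips.

```python
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Projected transistor count under an assumed strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

# 1971: ~2,300
# 1991: ~2,355,200
# 2011: ~2,411,724,800
# 2021: ~77,175,193,600
```

Fifty years of doubling multiplies the starting count by about 33 million, which is why the trend's slowdown matters so much: no comparable mechanism stands ready to replace it.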