(Replying to PARENT post)
Neural networks are shiny and new, but they are just an implementation of solutions from stats that have been around for decades.
Regression? MSE loss. Now with a neural network trained on MSE loss.
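A minimal sketch of the point (toy data, made-up numbers, NumPy assumed): a "network" with no hidden layers, trained by gradient descent on MSE, converges to exactly the ordinary least-squares fit.

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=200)

# "Neural network" with zero hidden layers: parameters w, b, MSE loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)   # dMSE/dw
    b -= lr * 2 * np.mean(pred - y)         # dMSE/db

# Closed-form OLS for comparison -- same answer, no training loop.
A = np.column_stack([x, np.ones_like(x)])
w_ols, b_ols = np.linalg.lstsq(A, y, rcond=None)[0]
```

The gradient-descent weights and the closed-form OLS coefficients agree to floating-point precision, which is the whole point: the loss defines the solution, the network is just one way to find it.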
Classification? Logistic regression with cross entropy loss.
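Same idea for classification, as a sketch (synthetic clusters, values made up): a single sigmoid unit trained on cross-entropy *is* logistic regression, and the cross-entropy gradient reduces to the classic (prediction minus target) form.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two 1-D clusters: class 0 near -2, class 1 near +2.
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
t = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid unit + cross-entropy loss = logistic regression.
# The gradient of cross-entropy w.r.t. (w, b) is just (p - t).
w, b = 0.0, 0.0
for _ in range(2000):
    p = sigmoid(w * x + b)
    w -= 0.1 * np.mean((p - t) * x)
    b -= 0.1 * np.mean(p - t)

acc = np.mean((sigmoid(w * x + b) > 0.5) == t)
```

Nothing about the "neural" framing changes the statistics; the decision boundary is the same linear one logistic regression has always produced.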
Anomaly detection? Feature extraction? Plenty of people still use PCA, which is nothing new. Autoencoders may get you more mileage, but conceptually work very similarly to PCA for these use cases.
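A hedged sketch of the PCA use case (toy 2-D data, all numbers invented): keep the top principal component via SVD and score points by reconstruction error; points far from the learned subspace are the anomalies. A linear autoencoder learns the same subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
# Normal points lie near a 1-D line in 2-D; add one point far off it.
t = rng.normal(size=100)
data = np.column_stack([t, 2 * t]) + 0.05 * rng.normal(size=(100, 2))
data = np.vstack([data, [[-3.0, 3.0]]])  # injected off-subspace anomaly

# PCA via SVD: keep the first component, score by reconstruction error.
Xc = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc = Vt[0]                                   # first principal direction
recon = np.outer(Xc @ pc, pc)                # project onto 1-D subspace
score = np.linalg.norm(Xc - recon, axis=1)   # anomaly score per point
```

The injected point (last row) gets by far the largest score; everything else sits within the noise level of the subspace.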
Image data? Use methods from signal processing, also decades old. Convolutions are nothing new, you're just now implementing them with neural networks, and adding a loss function based on what you're trying to predict.
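To make the signal-processing point concrete, here is a sketch of a plain 2-D convolution with a classic Sobel-style edge kernel, no deep-learning library involved (toy image, loop implementation for clarity):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, which is
    what most deep-learning 'conv' layers actually compute)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical edge: zeros on the left half, ones on the right.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# Sobel-style vertical-edge kernel, straight out of classic image processing.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(img, sobel_x)  # peaks along the edge, zero in flat regions
```

A CNN's first layer does exactly this operation; the only difference is that the kernel values are learned from a loss instead of hand-designed.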
Time series data? You could be better off just sticking with ARIMA. It depends on your use case, but RNNs may not even beat it here.
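As a sketch of how simple the classical route can be (simulated series, made-up coefficient; full ARIMA adds differencing and an MA term, for which statsmodels is the practical choice): fitting the AR part is just ordinary least squares on lagged values.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulate an AR(1) process: x_t = 0.8 * x_{t-1} + noise.
phi_true = 0.8
x = np.zeros(2000)
for t in range(1, len(x)):
    x[t] = phi_true * x[t - 1] + rng.normal()

# Estimating the AR coefficient is least squares on lagged values --
# no neural network, no training loop.
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

one_step_forecast = phi_hat * x[-1]  # next-step prediction
```

With a few thousand observations the estimate lands close to the true 0.8, and the model is interpretable and cheap, which is often all a forecasting problem needs.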
Reinforcement learning is more exciting, and is solving new problems that weren't even being approached before. Same goes for GANs, and unsupervised learning in general stays exciting and fresh.
But most applications of AI are ho-hum: decades-old methods, now implemented with neural networks. At least, sometimes. What has really changed is the amount of data available and the ability to process it, not necessarily the approaches to analyzing it.
(Replying to PARENT post)
> Taking averages, grouped by something? That's AI now.
I think that is right. The algorithm that does the grouped averages is machine learning, and if you put error bars around it, it is stats.
To address your concern: I wouldn't worry about the relevance of applying math and logic to the world. It has always been growing.
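The joke above can be written out in a few lines (toy rows, hypothetical keys and values): the grouped mean is the "machine learning", and the standard error of each mean is the "stats".

```python
from collections import defaultdict
from math import sqrt

# Hypothetical (key, value) rows to group.
rows = [("a", 1.0), ("a", 2.0), ("a", 3.0), ("b", 10.0), ("b", 14.0)]

groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

summary = {}
for key, vals in groups.items():
    n = len(vals)
    mean = sum(vals) / n
    # Sample variance -> standard error of the mean (needs n >= 2).
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    summary[key] = (mean, sqrt(var / n))  # (grouped average, error bar)
```

Group "a" averages 2.0 and group "b" averages 12.0; the second element of each tuple is the error bar that upgrades it from "AI" to statistics.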
(Replying to PARENT post)
One thing is AI to the press and public, another thing is AI to investors, yet another thing to nontechnical workers, and not even a single cohesive thing to the people building it all. Wherever you personally draw your lines between AI and not-AI, the boundaries do keep expanding. Does that mean the bubble is growing? There are undoubtedly more people doing machine learning, more people doing statistics, more people solving optimization problems, and more of each other thing that we call AI, but the "AI" label is growing faster than all that. It's a weird bubble. If it pops, does that mean there will be fewer jobs for people like me, or does it just mean people will stop calling it AI? Or is this just a word's meaning changing, and not a large bubble?
This story comes to mind: http://web.archive.org/web/20190626012618/https://gen.medium...