(Replying to PARENT post)
There's initial value from training yourself on what something looks/feels like … but diminishing returns after that. Whether there is more value to be found doesn't seem to matter.
Factories would sensor up, go nuts with data, find one or two major insights, tire of data, and then just continue operating how they were before … but with a few new operational tools in their quiver.
Same is true of fitness trackers: you excitedly get one, learn how much you really are sitting(!), adjust your patterns, time passes … then one day you realize you haven't put it on for a week. It stays in the drawer.
Not unless they're threatened with ruin will people make changes to the standard way of doing things. This is actually … not bad! Continuity is important, and this is kind of a subconscious gating function to prevent deviation from a proven way of working. So, the change has to be so compelling or so pressing that they're forced to. Not a bad thing.
While we think things change overnight in this world, they generally take a while … stay patient … it's worth it.
(Replying to PARENT post)
"Mate, we don't need a chip to tell us the soil's dry"
(Replying to PARENT post)
Having tons of data is a Good Thing, so long as you can afford the marginal cost of gathering and managing all that data so that it's ready at hand when you need it later.
It's how you use the data that makes all the difference. If you're facing an issue you don't understand at all, don't go digging for random correlations in your mountain of data to find an explanation.
Think like a scientist: you need a valid hypothesis first! Once you have a hypothesis about what your issue might plausibly be, then you make a prediction: "If I'm right, I suspect our Foobar data will show very low values of Xyzzy around 3AM every weekday night". Only then do you go look at that specific data to confirm or refute the hypothesis. If you don't get a confirmation, you need to go back to hypothesizing and predicting before you look again. You can't prove causation by merely correlating data.
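To make that concrete, here's a minimal sketch of what that targeted check might look like in pandas (the Foobar/Xyzzy names are the made-up ones from above, and the file and column names are placeholders):

    import pandas as pd

    # Hypothetical "Foobar" readings: a timestamp column plus an "xyzzy" value.
    df = pd.read_csv("foobar_readings.csv", parse_dates=["timestamp"])

    # Prediction: xyzzy should be very low around 3AM on weekday nights.
    is_weekday = df["timestamp"].dt.dayofweek < 5          # Mon-Fri
    is_around_3am = df["timestamp"].dt.hour.isin([2, 3])   # 02:00-03:59
    window = df[is_weekday & is_around_3am]
    baseline = df[~(is_weekday & is_around_3am)]

    # Compare the predicted window against everything else. If xyzzy isn't
    # clearly lower here, the hypothesis is refuted: go back to hypothesizing,
    # don't go fishing for some other correlation.
    print("3AM weekday mean:", window["xyzzy"].mean())
    print("baseline mean:   ", baseline["xyzzy"].mean())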
(Replying to PARENT post)
You don't need a chip to tell you that the soil is dry, but if you can use that chip to regulate drip irrigation that can apply substantially different flow to different plants, then you can get a not-too-much, not-too-little watering even if you have a big variation in conditions.
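The control loop for that doesn't have to be fancy, either. A rough sketch, where the moisture target, the readings, and the valve-controller hook are all made up for illustration:

    # Rough sketch of per-plant drip control. Sensor readings, the target,
    # and the set_flow_rate() hook are all hypothetical.
    TARGET_MOISTURE = 0.35   # fraction of field capacity, made-up number

    def flow_for(moisture: float) -> float:
        """Map a soil moisture reading to a drip flow rate (L/hour)."""
        deficit = max(0.0, TARGET_MOISTURE - moisture)
        return min(2.0, deficit * 10.0)   # cap it so nothing gets drowned

    readings = {"row1_plant3": 0.18, "row1_plant4": 0.33, "row2_plant1": 0.41}

    for plant_id, moisture in readings.items():
        rate = flow_for(moisture)
        print(f"{plant_id}: moisture={moisture:.2f} -> {rate:.1f} L/h")
        # set_flow_rate(plant_id, rate)   # whatever your valve controller exposes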
You don't need a big analysis to acknowledge what everybody already knows (that a particular competitor has lower or higher prices) and adjust your pricing accordingly; but doing that continuously, on a per-product basis, does require data and analysis.
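And the continuous per-product version is really just a small rule applied over a product feed. A toy sketch with invented SKUs, prices, and an assumed undercut-but-respect-the-floor rule:

    # Toy repricing pass: undercut the competitor slightly, but never drop
    # below a margin floor. All numbers are invented.
    products = [
        {"sku": "A100", "our_price": 19.99, "competitor_price": 18.49, "floor": 17.00},
        {"sku": "B200", "our_price": 9.99,  "competitor_price": 12.50, "floor": 8.00},
    ]

    for p in products:
        undercut = round(p["competitor_price"] * 0.99, 2)
        new_price = max(p["floor"], min(p["our_price"], undercut))
        # When the competitor is already more expensive (B200), we just hold our price.
        print(f"{p['sku']}: {p['our_price']:.2f} -> {new_price:.2f}")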
(Replying to PARENT post)
1. Keep the full raw data for a short period of time, at most 1 month.
2. Downsample what you need for a longer period of time (5-10% of the full data).
3. Aggregate your metrics on a yearly basis to save money and compute costs (steps 2 and 3 are sketched below).
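In pandas terms, steps 2 and 3 could be as simple as the following. The file names, column, and resample intervals are placeholders, and it assumes the raw metrics carry a DatetimeIndex:

    import pandas as pd

    # Hypothetical raw metrics: one row per reading, a DatetimeIndex,
    # and a single "value" column.
    raw = pd.read_parquet("metrics_raw.parquet")

    # Step 2: downsample to 1-minute means for the longer retention tier.
    downsampled = raw[["value"]].resample("1min").mean()

    # Step 3: keep only coarse yearly aggregates past the retention window.
    yearly = raw["value"].resample("YS").agg(["mean", "min", "max", "count"])

    downsampled.to_parquet("metrics_1min.parquet")
    yearly.to_parquet("metrics_yearly.parquet")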
(Replying to PARENT post)
So hyperspectral, like big data, is useful up front. But in the end, much simpler tools and algorithms will solve the problem on a continuing basis.
(Replying to PARENT post)
I spent a number of exciting years developing a high-frequency soil impedance scanner and finally understood why I was doing it. To confirm the obvious :)
(Replying to PARENT post)
If you want Toyota-style continuous improvement, you would need to keep improving in new areas of the process / against new metrics most of the time?
(Replying to PARENT post)
The problem is that they don't stay geek-crazy?
(Replying to PARENT post)
Most of what we as an industry are able to tell growers is stuff they already know or suspect. There is the occasional surprise or "Aha" moment where some correlation becomes apparent, but the thing about these is that once they've been observed and understood, the value of ongoing observation drops rapidly.
A great example of this is soil moisture sensors. Every farmer who puts these in goes geek-crazy for the first year or so. It's so cool to see charts that illustrate the effect of their irrigation efforts. They may even learn a little and make some adjustments. But once those adjustments and knowledge have been applied, it's not like they really need the ongoing telemetry as much anymore. They'll check periodically (maybe) to continue to validate their new assumptions, but 3 years later, the probes are often forgotten and left to rot, or reduced in count.