(Replying to PARENT post)
I don't know what the author is proposing. We don't have enough storage to persist all collision events; that would require zettabytes of disk space. The detectors are bottlenecked at storing a few hundred events per second, so they need to filter out the vast majority of the 40 million collision events per second that occur at the LHC.
Even then, I recall that around 1% of the stored events were saved via a 'minimum bias' trigger, one that doesn't apply any filter criteria. This was mainly for calibration purposes and for cross-checking simulation data. So we still have petabytes of collision events that didn't have any selection criteria applied.
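Some rough back-of-envelope numbers, just to show why the filtering is unavoidable. The ~1 MB/event size and the exact rates here are my own assumptions for illustration, not official CERN figures:

    # Back-of-envelope: why storing everything is impossible.
    # All numbers are rough assumptions, not official CERN figures.
    COLLISION_RATE_HZ = 40e6     # ~40 million bunch crossings per second
    EVENT_SIZE_BYTES = 1e6       # assume ~1 MB per recorded event
    STORED_RATE_HZ = 300         # assume a few hundred events/s written out
    SECONDS_PER_YEAR = 3.15e7

    raw_per_year = COLLISION_RATE_HZ * EVENT_SIZE_BYTES * SECONDS_PER_YEAR
    stored_per_year = STORED_RATE_HZ * EVENT_SIZE_BYTES * SECONDS_PER_YEAR
    min_bias_per_year = 0.01 * stored_per_year   # the ~1% unfiltered stream

    print(f"unfiltered:    {raw_per_year:.1e} bytes/year")     # ~1e21, zettabyte scale
    print(f"after trigger: {stored_per_year:.1e} bytes/year")  # ~1e16, petabyte scale
    print(f"min bias:      {min_bias_per_year:.1e} bytes/year")
    print(f"rejection factor: {COLLISION_RATE_HZ / STORED_RATE_HZ:.0e}")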
(Replying to PARENT post)
> From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.
These words show that the opinion comes from someone who does not know or understand the technology behind these systems, or the computing power and capabilities at CERN. Of course they have limitations, but they are the kind of limitations that keep pushing technology forward. Putting “trigger mechanism” in quotes shows how unfamiliar those concepts are to the author.
The trigger system is a fine-tuned filter that allows the electronics to keep working; without it they would be overloaded and crash, because the incoming data rate simply cannot be handled. How the trigger is set depends on the specific physical process being studied and is supported by theory and simulation. The scope of what can be tested is also limited, so the selection criteria are highly scrutinized and reviewed.
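To make the "fine-tuned filter" idea concrete, here is a toy sketch of a trigger decision. The menu entries, thresholds, and prescale are all invented for illustration; the real multi-level hardware/software triggers are vastly more sophisticated:

    # Toy trigger: keep an event only if it fires one of a few physics-motivated
    # selections, plus a heavily prescaled pass-through ("minimum bias") path.
    # All names and thresholds below are made up for illustration.
    MIN_BIAS_PRESCALE = 100_000   # keep ~1 in 100k events with no selection at all

    def trigger_accept(event, event_counter):
        if event["max_muon_pt_gev"] > 20:
            return True, "single_muon"
        if event["missing_et_gev"] > 100:
            return True, "missing_energy"
        if event["n_jets"] >= 4 and event["leading_jet_pt_gev"] > 50:
            return True, "multi_jet"
        if event_counter % MIN_BIAS_PRESCALE == 0:
            return True, "min_bias"   # unfiltered calibration stream
        return False, None            # everything else is dropped

Which selections go on the menu and where the thresholds sit is exactly the part that is driven by theory and simulation and then heavily reviewed.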
(Replying to PARENT post)
LIGO with gravitational wave astronomy and the Event Horizon Telescope with very long baseline interferometry are opening up new ways to observe the universe.
Yes, the data is not as abundant as it used to be a few decades ago, but that's the nature of the game. Our current models work very well at describing accessible energies, so this is going to take longer and require more and more ingenuity. I don't think the problem here is a lack of motivation for getting good answers -- on the contrary, anyone who can discover something major is going to have a lot of fame and credit come to them.
(Replying to PARENT post)
>But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week.
>
>“You did,” he pointed out.
http://backreaction.blogspot.com/2018/12/dont-ask-what-scien...
(Replying to PARENT post)
The biggest issue with using models as a tool in science is that they start to become unfalsifiable. To avoid the hornet's nest of modern models, consider geocentrism - the belief that Earth was uniquely at the center of the solar system, the universe, and everything. In the times before telescopes this belief was justified by models. If you assume it is true, then you get some really bizarre behavior from the planets that now orbit the Earth. In particular, some planets will suddenly stop and start moving the other way, most planets will travel in 'swirly' patterns, and so on. But when you have a model, none of this matters. Planets need to go backwards? Sure, why not. They travel in swirlies? Sure, why not.
So you get these increasingly convoluted and complex theories, but in spite of how irrational they seem, they are supported by what we see. At some point, though, you reach a dead end, when the model becomes so intractable that it's impossible to jury-rig yet another observation into it. And it's only at that point that we start to scratch our heads and wonder what's going on. Finding the problem there can be inconceivably difficult, because it can be something far more fundamental than you'd ever look for. For instance, in a geocentric universe you might search for why planets travel in swirlies. Yet you're at a much higher level than the actual problem, which is that they don't actually travel in swirlies. And in this toy example things are much better than they might be in our reality: there you're only a couple of 'fundamentals' removed from the real problem. With our rapid pace of publication and 'stair stepping', models advance and build upon themselves exponentially more rapidly.
Like a single cog breaking in a clock, all it takes is a single falsehood assumed as truth in a model to begin undermining the entire, phenomenally complex system.
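To make the 'swirlies' point concrete: seen from Earth, a planet's direction is just the difference of two roughly circular heliocentric orbits, which is mathematically the same thing as a deferent plus an epicycle, so a geocentric model really could be patched to fit the retrograde loops. A toy sketch (circular orbits, rough radii and periods for Earth and Mars; purely illustrative):

    import math

    # Toy model: apparent position of Mars as seen from Earth, with both planets
    # on circular heliocentric orbits. Radii in AU, periods in years (rough values).
    R_EARTH, T_EARTH = 1.0, 1.0
    R_MARS, T_MARS = 1.52, 1.88

    def geocentric_longitude(t):
        ex = R_EARTH * math.cos(2 * math.pi * t / T_EARTH)
        ey = R_EARTH * math.sin(2 * math.pi * t / T_EARTH)
        mx = R_MARS * math.cos(2 * math.pi * t / T_MARS)
        my = R_MARS * math.sin(2 * math.pi * t / T_MARS)
        # Earth-to-Mars vector is a sum of two circular motions: deferent + epicycle
        return math.degrees(math.atan2(my - ey, mx - ex))

    # The apparent longitude mostly increases, but periodically runs backwards
    # (retrograde motion): the "swirlies" that epicycles were invented to fit.
    prev = geocentric_longitude(0.0)
    for step in range(1, 300):
        t = step * 0.01  # years
        lon = geocentric_longitude(t)
        delta = (lon - prev + 180) % 360 - 180   # wrap the change to [-180, 180)
        if delta < 0:
            print(f"t = {t:.2f} yr: Mars appears to move backwards")
        prev = lon

The epicycle model fits because it is mathematically flexible enough to fit, not because the planets actually loop.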
(Replying to PARENT post)
https://www.overcomingbias.com/2018/12/can-foundational-phys...
>But I have on my blog discussed what I think should be done, eg here:
>http://backreaction.blogspot.com/2017/03/academia-is-fucked-...
>Which is a project I have partly realized, see here
>And in case that isn't enough, I have a 15 page proposal here:
>https://fias.uni-frankfurt.de/~hossi/Physics/PartB2_SciMeter...
(Replying to PARENT post)
Given that there's "not much to do", it's weird that there isn't a 1% group somewhere out there who thinks this is worthwhile.
Context = I studied particle/astro physics 15 years ago and then "dodged the bullet" as the OP nicely puts it. I felt the way we were taught modern physics was quite poor/handwavy, esp. once we crossed into QFT territory, and it led to a lot of misunderstanding/confusion, which I still see in comment threads today, incl. between academics (!). Also, when I speak to mathematicians about this, they deeply disapprove of the way physics is taught/run today in this respect, and can routinely point to misunderstandings/confusion that hinder progress.
A good example (but probably too mathy) is the work of Tamas Matolcsi: `Spacetime without reference frames` and `Ordinary thermodynamics`.
https://www.amazon.com/s/ref=dp_byline_sr_book_1?ie=UTF8&tex...