  • it’s basically just pattern recognition

    Only of a very specific kind.

    Something computers are really good at.

    They’re good at recognizing the patterns they’re programmed to recognize. That tells you nothing about the significance of a pattern, its impact if detected, or the statistical error rates of the detection algorithm and its input data. All of those are critical to making real-life decisions. So is explainability, which existing AI systems don’t do very well. At least Anthropic recognizes that as an important research topic. OpenAI seems more concerned with monetizing what it already has.

    For something safety-critical, you can monitor critical parameters in the system’s state space and alert if they go (or are likely to go) out of safe bounds. You can also model the likely effects of corrective actions. Neither of those requires any kind of AI, though you might feed ML output into your effects model(s) when constructing them. Generally speaking, if lives or health are on the line, you’re going to want something more deterministic than AI to be driving your decisions. There’s probably already enough fuzz due to the use of ensemble modeling. There’s a rough sketch of what that kind of monitor looks like at the end of this comment.

    What computers are really good at is aggregating large volumes of data from multiple sensors, running statistical calculations on that data, transforming it into something a person can visualize, and providing decision aids to help the operators understand the consequences of potential corrective actions. But modeling the consequences depends on how well you’ve modeled the system, and AIs are not good at constructing those models. That still relies on humans, working according to some brutally strict methodologies.

    Source: I’ve written large amounts of safety-critical code and have architected several safety-critical systems that have run well. There are some interesting opportunities for more use of ML in my field. But in this space, I wouldn’t touch LLMs with a barge pole. LLM-land is Marlboro country. Anyone telling you differently is running a con.
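
    To make the “monitor and alert” point concrete, here’s a minimal sketch of that kind of deterministic bounds monitor, in Python. Everything in it is invented for illustration: the parameter names, limits, warning margin and sensor values would all come out of a real system’s safety analysis, not out of a forum comment.

    ```python
    # Hypothetical sketch of deterministic state-space monitoring. Nothing here is
    # from a real system; the parameters, limits and margins are made up.
    from dataclasses import dataclass
    from statistics import median


    @dataclass
    class Limit:
        low: float      # lowest safe value
        high: float     # highest safe value
        margin: float   # fraction of the range treated as "likely to go out of bounds"


    # Safe envelopes for two invented state-space parameters.
    LIMITS = {
        "coolant_temp_c": Limit(low=10.0, high=95.0, margin=0.10),
        "vessel_pressure_kpa": Limit(low=90.0, high=450.0, margin=0.05),
    }


    def fuse(readings: list[float]) -> float:
        """Aggregate redundant sensor readings; the median is robust to one stuck sensor."""
        return median(readings)


    def check(state: dict[str, float]) -> list[str]:
        """Return alerts for any monitored parameter that is out of, or close to, its bounds."""
        alerts = []
        for name, value in state.items():
            lim = LIMITS.get(name)
            if lim is None:
                continue  # not a monitored parameter
            span = lim.high - lim.low
            if not (lim.low <= value <= lim.high):
                alerts.append(f"ALARM: {name}={value} outside [{lim.low}, {lim.high}]")
            elif value < lim.low + lim.margin * span or value > lim.high - lim.margin * span:
                alerts.append(f"WARNING: {name}={value} approaching its limit")
        return alerts


    if __name__ == "__main__":
        state = {
            "coolant_temp_c": fuse([90.8, 91.2, 91.0]),          # redundant sensors, fused
            "vessel_pressure_kpa": fuse([298.0, 301.0, 300.5]),
        }
        print(check(state))  # -> a WARNING for coolant_temp_c, nothing for pressure
    ```

    The point is that every line of that is auditable and testable against requirements. ML can help build the models that set the limits, but it doesn’t sit in the alerting path.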



  • a million still undeniably makes you wealthy even today

    Not really, no. Look at this: https://www.annuity.org/annuities/rates/

    An annuity rate is a good benchmark for the lowest-risk rate of return on a given sum. You can only do better by increasing your risk profile, and if you need a reliable source of income over the long term, increasing the risk is a bad idea.

    Assuming the annuity is tax-free (a big if), the payout on a million would be about $60k/year. But that ignores the fact that you need to live somewhere. Assuming you live in a non-shithole urban area, you’re not going to find much under $500k that’s worth living in. If you put $500k into property instead of taking on a mortgage, you’re down to an annuity of $30k. And a sizeable chunk of that will go to property tax and health insurance. (The arithmetic is sketched at the end of this comment.)

    $30k is nice, but not life-changing. Having $1M in net worth puts you somewhere in the middle of the middle class. It’ll help you retire without having to live on pet food, but you’d still better be maxed out on your Social Security contributions if you want to live semi-comfortably.

    Source: I’m retiring soon and have done lots of planning, and have had my plans reviewed and validated by financial planners. It takes more than $1M in net assets to live in a decent house, have some discretionary income, and keep some reserves to cover possible long-term care, car replacement and other contingencies.

    Having said all that, you’ll never hear me whine about my financial situation. I’m in my present bourgeois position due to a combination of hard work, deferred gratification, and luck, and few of my peers have been as fortunate. An unexpected layoff, a divorce, or a health crisis in our family could have left me far worse off.
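
    For anyone who wants the arithmetic in one place, here’s the back-of-the-envelope version of the numbers above. The 6% payout rate is an assumption roughly in line with the fixed-annuity quotes behind that link, not a guaranteed figure, and it ignores taxes just as the example does.

    ```python
    # Back-of-the-envelope version of the arithmetic above. The 6% payout rate is
    # an assumed figure, not a quote; taxes are ignored, as in the comment.
    nest_egg = 1_000_000
    payout_rate = 0.06        # assumed annual annuity payout rate
    house = 500_000           # bought outright instead of carrying a mortgage

    income_whole_sum = nest_egg * payout_rate               # annuitize the full million
    income_after_house = (nest_egg - house) * payout_rate   # annuitize what's left after the house

    print(f"Full $1M annuitized:    ${income_whole_sum:,.0f}/yr")    # ~$60,000/yr
    print(f"After buying the house: ${income_after_house:,.0f}/yr")  # ~$30,000/yr
    ```

    Property tax, insurance, and health care then come out of that $30k, which is the point.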