• 1 Post
  • 10 Comments
Joined 5 years ago
Cake day: July 26th, 2020


  • So, your strategy here was to purposefully make a convenient strawman ragebait of “we should ban general computing” – something that has exactly zero to do with the blogpost – and then ask a trick question? Clearly you are not here to have a conversation in good faith. Why bother?

    But please do elaborate what my “stance on general purpose computing” is? Genuinely curious.

    Regarding the question of why LLMs are “somehow a bigger problem”, I never said they are a “bigger problem”. I did not compare them to anything. Comparing LLMs to “general purpose computing” is like comparing hypermarkets to “the market economy” (and just before you go on another red herring quest: I said “market economy” not “capitalism”, that’s a whole different conversation). It makes no sense.

    Hypermarkets are one possible artifact of the market economy, and LLMs are one possible artifact of general purpose computing. That does not change the fact that hypermarkets have huge issues attached to them. Just as LLMs have huge issues attached to them.

    We can have general purpose computing while recognizing issues related to LLMs, just as we can have a market economy while recognizing issues with hypermarkets. We can choose to promote or discourage them in our environment. Pointing out issues with them is not the same as calling for them to be banned. This is not at all difficult to grasp for anyone who comes into such a conversation in earnest.

    What I wrote about in the blogpost is a particular set of issues related to LLMs, in the context of a deluge of hype trying to convince us that LLMs can somehow break passwords (they can’t), exploit vulnerabilities (they can’t), and autonomously orchestrate cyber-attacks (again, they can’t).

    LLMs add a shit-ton of attack surface due to their complexity, and will end up being a larger security problem than any of the fear-hyped scenarios above. Honestly not sure what’s so controversial here?

    And look, if you don’t like my writing, just stop reading it. It’s really super-easy. There’s plenty of other stuff to read online, you can even use an LLM to generate something you’d like better.






  • Is the stance here that AI is more dangerous than those because of its black box nature, its poor guardrails, the fact that it’s a developing technology, or its unfettered access?

    All of the above, I guess. Although I am not keen on making comparisons to these previous things. I have previously written about how IoT/“Smart” devices are a massive security issue, for example. This is not a competition; the point is not whether or not these tools are worse by some degree than some other problematic technologies. The point is that the AI hype would have you believe they are some end-all demiurges, when the real threat is coming from inside the house.

    Also, do you think that the “popularity” of Google Gemini is because people were already indoctrinated into the Assistant ecosystem before it became Gemini, and because Google already had a stranglehold on the search market, so the integration of Gemini into those services isn’t seen as dangerous, given that people are already reliant and Google is a known brand rather than a new “startup”?

    I don’t know about Gemini’s actual popularity. What I do know is that it is being shoved down people’s throats in every possible way.

    My feeling is that a lot of people would prefer to use their tools and devices the way they had before this crap came down the pipeline but they simply don’t know how to turn it off reliably (partially because Google makes it really hard to do so), and so Google gets to make bullish claims on line-going-up as far as “people using Gemini” are concerned.



  • I am not opposed to machine learning as a technology. I use Firefox’s built-in translation as a way to access information online I otherwise would not be able to access, and I think it’s great that small, local models can provide this kind of functionality.

    I am opposed to marketing terms like “AI” – “AI” is a marketing term, there are now toothbrushes with “AI” – and I am opposed to religious pseudo-sciencey bullshit like AGI (here’s Marc Andreessen talking about how “AGI is a search for God”).

    I also see very little use for LLMs. This has been pointed out before, by researchers who got fired from Google for doing so: smaller, more tailored models are going to be better suited for specific tasks than ever-bigger humongous behemoths. The only reason Big Tech is desperately pushing for huge models is that these cannot be run locally, which means Big Tech can monopolize them. Firefox’s translation models show what we could have if we went in a different direction.

    I cannot wait for the bubble to burst so that we can start getting the actually interesting tools out of ML-adjacent technologies, instead of the insufferable investor-targeted hype we are getting today. Just as we started getting actually interesting Internet stuff once the dot-com bubble popped.