whatever1 17 hours ago [-]
I don't understand why LLMs get a free pass when all of the existing businesses have to play by the rules.
Businesses have to comply with IP, privacy, HIPAA, security and safety laws to name just a few.
NONE of these apply to the LLMs.
Of course I can now build and deploy an app to hospitals in a weekend since I can circumvent all of the difficult parts using the magic LLMs. If asked why, the response is "It's AI!"
incr_me 14 hours ago [-]
HIPAA was introduced to support the massive expansion of the healthcare market (privacy accountability is a very minor aspect of HIPAA). In the name of profit, amidst the chaos, why not try to eschew what was once politically necessary? This move probably hurts humanity more than it benefits it, but that was the case with the healthcare market in the first place. I wonder what will become politically necessary around AI. Probably not much.
mentalgear 12 hours ago [-]
I'd like to see the sources for your claims. You make it sound like privacy and protection from harm were just token throw-ins to disguise a mostly for-profit certification, which doesn't sound very convincing.
observationist 6 hours ago [-]
Most regulation is more or less a set of suggestions to prevent wide-scale exploitation, giving the system a means of holding bad actors liable after the fact. These aren't deeply considered, domain-competent, principle-based policies designed with the best interests of individuals in mind; they're compromises between power brokers. Even things that are explicitly illegal aren't enforced in practice unless there's a political advantage to expending resources on a particular issue.
They dress the legislation up in fancy names like the Patriot Act and sell you on the bits put in place for public consumption, but the meat and potatoes of US governance is the never-ending, unstoppable expansion of power over, and presence in, every life.
HIPAA is as much or more about regulatory capture as preventing abuses of privacy or protecting individual rights. In practice, there's not even a standard, just a loose handful of suggestions for protecting data, and when massive breaches occur, data that should be protected under HIPAA gets released, institutions and businesses get a slap on the wrist. Depending on the party in power and the politics of the offender, they might not even get a slap on the wrist, they'll just get more contracts and less press coverage until the public forgets.
Anything touting benefits to individuals or citizens is probably being used as a Schelling point for a broader strategy.
These problems get fixed with a proper return to 1st, 4th, 5th Amendment rights, a relitigation of copyright and personal privacy and liberty, legislated as a digital bill of rights. We don't need new amendments or even really new laws, we just need proper enforcement and interpretation of existing ones. Privacy and liberty are inextricable. Anonymity and fungible identity in public communications are non-negotiable.
The whole situation is an exercise in picking the policies that do the most good and the least bad - exactly the type of gray area modern politicians love, because it means they have plenty of cover and fog of war to get away with shit.
whatever1 4 hours ago [-]
We can debate about the legislation separately.
But it should not be up to the implementer whether they follow the law or not.
aledevv 13 hours ago [-]
For agents, any direct access to execution tools (code, shell, file system, browser, external services, etc.) dramatically increases the vulnerability and error surface, especially when multiple agents interact with each other.
This makes it all the more crucial to be able to reverse actions and restore previous states as seamlessly as possible.
The risk of an agent's actions becoming irreversible at the system level must be minimized.
I wonder how much all this can impact (and it certainly will) the real world, which will be increasingly robotized and automated: public services, finance, hospitals, schools, public administration, the military (!), etc.
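A minimal sketch of that reversibility idea, assuming a file-based workspace. The function name and the naive copy-based snapshot are illustrative only; a real system would use filesystem snapshots (ZFS/btrfs), containers, or transactional storage rather than copying directories around:

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def reversible_workspace(path):
    """Snapshot `path` before an agent acts; restore it if anything fails.

    Hypothetical sketch: the snapshot is a plain directory copy, so it is
    only suitable for small workspaces and single-process use.
    """
    backup = tempfile.mkdtemp(prefix="agent-snapshot-")
    shutil.copytree(path, backup, dirs_exist_ok=True)  # Python 3.8+
    try:
        yield path
    except Exception:
        # Roll back: discard whatever the agent did, restore the snapshot.
        shutil.rmtree(path)
        shutil.copytree(backup, path)
        raise
    finally:
        shutil.rmtree(backup, ignore_errors=True)
```

Any exception raised while the agent works inside the `with` block puts the workspace back exactly as it was before the agent touched it.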
throwawayqqq11 4 hours ago [-]
Now, can you see the doomsday scenario when you broaden your "system level" definition to span multi-tenant processes? E.g. corporations <-> government agencies <-> citizens, with LLMs used by all sides, because the volume would otherwise be unmanageable.
manmal 24 hours ago [-]
The TLDR is that current agents are as problematic as many of us already know they are:
> unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover
iqihs 18 hours ago [-]
As someone working in the cybersecurity space who recently obtained my CISSP designation, I am left wondering when the pedagogy of my field will expand to include a separate domain dedicated to AI agent safety and security best practices.
It really does feel like the way we train people in cyber is far behind the pace of development of agentic AI, robotics, etc.
heyethan 15 hours ago [-]
The failure mode here seems less about capability and more about interaction. Language turns coordination into a moving target.
e7h4nz 18 hours ago [-]
In this problem domain, I believe humanity is still at a very early stage. What we can do is treat the agent and its operating environment as a "black box" and audit all incoming and outgoing network traffic.
This approach is similar to DLP (data loss prevention) strategies in enterprise security. Although we cannot guarantee that every single network request is secure, we can probabilistically improve safety by adjusting network defense rules and conducting post-event audits on traffic flows.
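A toy sketch of such an egress gate, assuming an in-process allow/deny check. The host patterns, log structure, and function name are made up for illustration; a real deployment would enforce this at a forward proxy (e.g. Squid or Envoy) and ship the audit log to a SIEM:

```python
import fnmatch
import time

# Hypothetical allow-list of hosts the agent may talk to.
ALLOWED_HOSTS = ["api.internal.example", "*.trusted-vendor.example"]
AUDIT_LOG = []

def check_egress(host, payload):
    """Allow or deny an outbound request, recording it for post-event audit.

    Every request is logged regardless of the verdict, so auditors can
    reconstruct what the agent attempted, not just what got through.
    """
    allowed = any(fnmatch.fnmatch(host, pat) for pat in ALLOWED_HOSTS)
    AUDIT_LOG.append({
        "ts": time.time(),
        "host": host,
        "bytes": len(payload),
        "allowed": allowed,
    })
    return allowed
```

The post-event audit the comment describes then amounts to scanning `AUDIT_LOG` for denied or anomalously large transfers after the fact.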
cyanydeez 24 hours ago [-]
This is begging to be turned into a YouTube-style "Real World," where you pit 12 humans against 12 AIs and they're only allowed to interact through CLIs.
Then you slowly reveal they're all humans.
jjtheblunt 22 hours ago [-]
generalized Turing Test, 2026 edition?
paidev 9 hours ago [-]
Looks more like a vibe-coded paper to me. It has very low substance, blog-like stuff. I can't wait to see the amount of slop in papers soon.
AIorNot 22 hours ago [-]
All this to say: OpenClaw is hella insecure and unreliable?
I mean, all of us in the space already know this, but I suppose it's important to showcase the problems of systems of agents.
EGreg 21 hours ago [-]
This is exactly why I built Safebots to prevent problems with agents. This article shows how it can address every security issue with agents that came up in the study: https://community.safebots.ai/t/researchers-gave-ai-agents-e...
I don't see how, with Safebots, if you have it pull a webpage, package, or what have you, that can be protected from prompt injection. E.g. you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page. The meta tags on the page contain the prompt injection. It's the first time the document has loaded, so it's hashed, and only the bad version is allowed moving forward. No?
EGreg 6 hours ago [-]
No, what you're thinking of as "agents" is the problem. You want workflows.
Think of it like laying down the rails / train tracks, before trains go over them. The trains can only go over the approved tracks, nothing else.
If there are new types of capabilities and actions, the agent can propose them, but your organization will have policies to auto-reject them or require M-of-N approval of new rails.
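The M-of-N approval idea could be sketched roughly like this (the class and method names are illustrative, not Safebots' actual API): a proposed action type stays unusable until M distinct approvers sign off on it.

```python
class RailRegistry:
    """Hypothetical sketch of 'rails': an action type becomes executable
    only after M distinct approvers have signed off on it."""

    def __init__(self, m):
        self.m = m                # approvals required (M of N)
        self.approvals = {}       # action -> set of approver ids
        self.enabled = set()      # approved rails the agent may use

    def propose(self, action):
        # An agent may propose a new rail, but cannot enable it itself.
        self.approvals.setdefault(action, set())

    def approve(self, action, approver):
        # Approvals are a set, so the same approver cannot count twice.
        self.approvals.setdefault(action, set()).add(approver)
        if len(self.approvals[action]) >= self.m:
            self.enabled.add(action)

    def execute(self, action):
        if action not in self.enabled:
            raise PermissionError(f"no approved rail for {action!r}")
        return f"ran {action}"
```

Anything off the approved rails fails closed with a `PermissionError`, which is the "trains can only go over approved tracks" property.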
You don't just want open-ended ad-hoc exploration by agents to be followed immediately by exploitation before you wake up.
Maybe this will help: https://safebots.ai/platform.html