Short answer
No AI tool is safe just because it has a nice README. Skills and MCP servers can be well built and still be dangerous in the wrong host, with the wrong auth, or without a realistic review of what they can touch.
The safe path is to verify maintainer identity, runtime permissions, freshness, install method, and client guardrails before enabling the tool.
What usually goes wrong
- The maintainer is unknown, abandoned, or using a throwaway repository.
- The install path pulls code dynamically without pinning a version or release artifact.
- The server requests more access than the task needs, especially shell or broad write access.
- The tool was reviewed months ago, but the repository changed materially since then.
- The client has auto-run enabled and the team mistakes that convenience for trust.
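The unpinned-install failure above is easy to catch mechanically. A minimal sketch, assuming pip-style requirement strings; the package name "example-mcp-server" is hypothetical:

```python
import re

def is_pinned(spec: str) -> bool:
    """True only when the requirement names an exact version."""
    return bool(re.search(r"==\s*\d", spec))

# Hypothetical specs: only the last one resolves to a fixed release artifact.
print(is_pinned("example-mcp-server"))         # False: floats to latest
print(is_pinned("example-mcp-server>=1.0"))    # False: floats within a range
print(is_pinned("example-mcp-server==1.4.2"))  # True: pinned
```

The same idea applies to npm or git installs: an install that names a tag, hash, or exact version can be reviewed once; one that floats must be reviewed every time it runs.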
What a safer install looks like
A safer install is boring in the best possible way: the source is identifiable, the install method is explicit, the permissions are understandable, the host shows you when tools are being used, and there is a clear rollback path if something behaves badly.
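Concretely, "explicit and narrow" can look like the sketch below. The `mcpServers` shape follows the convention several MCP hosts use, but exact keys vary by client, and the package name, version, and path here are hypothetical:

```python
import json

# A host config entry that makes the install decision observable:
# explicit command, pinned version, narrowest useful filesystem scope.
config = {
    "mcpServers": {
        "files-readonly": {
            "command": "npx",
            "args": [
                "-y",
                "@example/mcp-files@1.2.3",     # hypothetical package, pinned
                "--root", "/home/dev/project",  # workspace scope, not home dir
            ],
        }
    }
}
print(json.dumps(config, indent=2))
```

Rolling back is then a one-line change: delete the entry, and nothing else on the machine remembers the tool existed.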
This is why registries like Aescut matter. They do not make a tool magically safe, but they make the decision observable enough that a team can stop installing on vibes alone.
What to do if you are unsure
- Prefer a known maintainer or an official server published by the underlying vendor.
- Install in the narrowest scope first: workspace before global, read-only before write, manual approvals before auto-run.
- Use the registry data and read the install metadata before you turn the tool on for a whole team.
- If a tool is unreviewed, assume you are doing the security review yourself.
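If you are the one doing that review, the checklist above reduces to a handful of signals you can score yourself. A minimal sketch; the 180-day staleness threshold and the flag wording are assumptions, not a standard:

```python
from datetime import date, timedelta

def review_flags(last_commit: date, pinned: bool, wants_shell: bool,
                 auto_run: bool, today: date) -> list[str]:
    """Return the reasons this tool should not be enabled yet."""
    flags = []
    if today - last_commit > timedelta(days=180):
        flags.append("stale: no commits in six months")
    if not pinned:
        flags.append("install source is not pinned")
    if wants_shell:
        flags.append("requests shell execution")
    if auto_run:
        flags.append("client auto-run is enabled")
    return flags

# A tool that trips every signal at once:
print(review_flags(date(2024, 1, 1), pinned=False, wants_shell=True,
                   auto_run=True, today=date(2025, 1, 1)))
```

An empty list is not proof of safety; it just means none of the cheap, obvious objections apply and the expensive review (reading the code) is worth starting.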