
AI’s Brave New World: Whatever happened to security? Privacy?

The following is a guest post from John deVadoss, a member of the Governing Board of the Global Blockchain Business Council in Geneva and co-founder of the InterWork Alliance in Washington, DC.

Last week, in Washington, DC, I had the opportunity to present and discuss the security implications of AI with some members of Congress and their staff.

Generative AI today reminds me of the Internet in the late 80s – fundamental research, latent potential, and academic usage, but it is not yet ready for the public. This time, unfettered vendor ambition, fueled by minor-league venture capital and galvanized by Twitter echo chambers, is fast-tracking AI’s Brave New World.

The so-called “public” foundation models are tainted and inappropriate for consumer and commercial use; privacy abstractions, where they exist, leak like a sieve; security constructs are very much a work in progress, as the attack surface area and the threat vectors are still being understood; and the illusory guardrails, the less that is said about them, the better.

So, how did we end up here? And whatever happened to Security? Privacy?

“Compromised” Foundation Models

The so-called “open” models are anything but open. Different vendors tout their degrees of openness by opening up access to the model weights, or the documentation, or the tests. Still, none of the major vendors provide anything close to the training data sets, their manifests, or their lineage that would allow anyone to replicate and reproduce their models.

This opacity with respect to the training data sets means that if you wish to use one or more of these models, then you, as a consumer or as an organization, have no way to verify or validate the extent of the data pollution with respect to IP, copyrights, and the like, or the presence of potentially illegal content.

Critically, without the manifest of the training data sets, there is no way to verify or validate the absence of malicious content. Nefarious actors, including state-sponsored actors, plant trojan horse content across the web that the models ingest during training, leading to unpredictable and potentially malicious side effects at inference time.

Remember, once a model is compromised, there is no way for it to unlearn; the only option is to destroy it.

“Porous” Security

Generative AI models are the ultimate security honeypots because “all” data has been ingested into one container. New classes and categories of attack vectors arise in the era of AI; the industry has yet to come to terms with the implications, both with respect to securing these models from cyber threats and with respect to how these models are used as tools by cyber-threat actors.

Malicious prompt injection techniques may be used to poison the index; data poisoning may be used to corrupt the weights; embedding attacks, including inversion techniques, may be used to pull rich data out of the embeddings; membership inference may be used to determine whether certain data was in the training set; and this is just the tip of the iceberg.
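
To make one of these attack classes concrete, here is a minimal, illustrative sketch of a loss-threshold membership-inference check, one common formulation of the last technique named above. Every name in it (model_loss, the calibration heuristic, the sample strings) is a hypothetical stand-in rather than any vendor's API, and the loss function is stubbed purely so the example runs without a real model.

```python
# Illustrative only: a loss-threshold membership-inference check.
# model_loss() is a hypothetical stand-in for querying the target model;
# it is stubbed here so the example runs end to end.

import statistics


def model_loss(text: str) -> float:
    """Stand-in for the target model's loss (negative log-likelihood) on `text`.
    A real attack would query the model under test; this stub is deterministic
    so the example is self-contained."""
    return 1.0 / (1.0 + len(set(text.split())))


def calibrate_threshold(known_non_members: list[str]) -> float:
    """Derive a decision threshold from losses on data known to be outside the
    training set (for example, text written after the model's training cutoff)."""
    losses = [model_loss(t) for t in known_non_members]
    return statistics.mean(losses) - statistics.pstdev(losses)


def likely_member(candidate: str, threshold: float) -> bool:
    """Flag a candidate as probably present in the training set when the model
    is suspiciously confident (unusually low loss) on it."""
    return model_loss(candidate) < threshold


if __name__ == "__main__":
    non_members = ["a sentence written after the training cutoff",
                   "another clearly recent sentence"]
    threshold = calibrate_threshold(non_members)
    print(likely_member("a suspiciously fluent candidate passage", threshold))
```

The point of the sketch is not the toy arithmetic but the shape of the attack: nothing more than query access to the model's confidence is needed to start probing what it was trained on.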

Threat actors may gain access to confidential data via model inversion and programmatic querying; they may corrupt or otherwise influence the model’s latent behavior; and, as mentioned earlier, the uncontrolled ingestion of data at large leads to the threat of embedded state-sponsored cyber activity via trojan horses and more.

“Leaky” Privacy

AI models are helpful because of the data sets that they are trained on; indiscriminate ingestion of data at scale creates unprecedented privacy risks for the individual and for the public at large. In the era of AI, privacy has become a societal concern; regulations that primarily address individual data rights are inadequate.

Beyond static data, it is imperative that dynamic conversational prompts be treated as IP to be protected and safeguarded. If you are a consumer co-creating an artifact with a model, you want the prompts that direct this creative activity not to be used to train the model or otherwise shared with other consumers of the model.

If you are an employee working with a model to deliver business outcomes, your employer expects your prompts to be confidential; further, the prompts and the responses need a secure audit trail in the event of liability issues raised by either party. This need is all the more acute given the stochastic nature of these models and the variability in their responses over time.
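
One way to read that audit-trail requirement in practice is sketched below: a minimal, illustrative hash-chained log of prompt/response pairs, where each entry commits to the hash of the previous one so that after-the-fact tampering is detectable. The class and field names are assumptions made for this example, not an established standard or any particular vendor's logging API.

```python
# Illustrative only: a minimal hash-chained audit log for prompt/response pairs.
# All names and fields here are assumptions for the sketch, not a standard.

import hashlib
import json
import time


class PromptAuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, user: str, prompt: str, response: str) -> dict:
        """Append a prompt/response pair, chaining each entry to the hash of
        the previous one so later tampering or deletion is detectable."""
        entry = {
            "ts": time.time(),
            "user": user,
            "prompt": prompt,
            "response": response,
            "prev_hash": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered or removed."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True


if __name__ == "__main__":
    log = PromptAuditLog()
    log.record("employee-42", "Summarise the Q3 pipeline", "(model output)")
    print(log.verify())  # True unless an entry has been modified after the fact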

What happens next?

We are dealing with a different kind of technology, unlike any we have seen before in the history of computing, a technology that exhibits emergent, latent behavior at scale; yesterday’s approaches for security, privacy, and confidentiality do not work anymore.

The industry leaders are throwing caution to the winds, leaving regulators and policymakers with no alternative but to step in.

