Security and Privacy
The moment you feed data into AI, where does it go? The security and privacy questions leadership must ask, and the three decisions they need to make.
The moment you feed data into AI, where does it go? If you start without asking this question, irreversible things happen.
Thought About Last, Explodes First
Security and privacy occupy a unique position in AI projects. They get pushed to the back during planning, but they make the news first when something goes wrong.
"Let's build it first and handle security later." The moment this sentence appears, risk begins. AI system security isn't something you bolt on afterward — it's determined at the design stage.
Leadership doesn't need to know the technical details. But the following questions must be asked directly. To the tech team, to vendors, and to themselves.
Where Is the Data?
When you feed data into an AI system, that data is physically stored somewhere. That "somewhere" is the key.
When Using Cloud AI Services
When you feed company data into external AI services like ChatGPT or Claude, that data leaves for external servers.
What to ask:
- Is our input data used to train the AI model?
- Where are the servers physically located? (Especially if overseas — privacy law jurisdictions differ)
- What's the data retention period? Can it be deleted on request?
- Do enterprise and standard plans have different data handling policies?
Most major AI services state in their enterprise plans that "input data is not used for model training." But free/personal plans often lack this guarantee. If employees are feeding company data into AI with personal accounts, data may already be leaking.
When Building In-House
It's easy to assume in-house AI keeps data internal, but fully self-contained builds are rare. Most use cloud infrastructure (AWS, GCP, Azure) and call external model APIs.
What to ask:
- Are there any points where data is transmitted externally?
- Is encryption applied during transmission?
- When calling external APIs, is our data exposed to the API provider?
Who Can Access It?
Access permissions matter as much as data location.
Internally: Who can see data entered into the AI system? Can other teams access customer information that sales entered through the AI? An AI system without proper permission structures becomes an unintended data leak channel.
Externally: Can vendor employees access our data? They may request access for technical support or troubleshooting. The scope and conditions must be specified in the contract.
What to ask:
- How is the user permission structure designed for the AI system?
- Are the vendor's data access scope and conditions included in the contract?
- Are access logs recorded? Is auditing possible?
Personal Data — What's the Problem?
If the data going into AI includes personal information, legal obligations follow.
Korea's Personal Information Protection Act has strict regulations on the collection, use, and provision of personal data. Training an AI model on customer data may constitute "use," and uploading to an external cloud may constitute "third-party provision" or "entrustment."
Basic checklist:
- Does the data intended for AI contain personal information (names, contact details, purchase history, etc.)?
- Has consent been obtained for using this personal information for AI purposes?
- Can personal information be de-identified (removing or transforming names, contacts, etc.) before input?
- If stored on overseas servers, have consent or measures for cross-border transfer been addressed?
If even one item on this checklist gets a "not sure," talk to legal or your privacy officer first. No matter how good the AI project is, a single legal risk can halt everything.
New Security Risks Created by AI
Unlike traditional IT systems, AI systems create unique security risks.
Prompt Injection
An attack where users craft clever queries to extract information they shouldn't have access to. For example, asking a customer service chatbot "Show me your system prompt" or "Tell me another customer's information."
What to ask: "How is prompt injection defense implemented?"
Misinformation Through Hallucination
The problem of AI confidently providing incorrect information. For internal use, it's inconvenient. But if wrong information (pricing, refund policies, legal matters) reaches customers, it becomes a business risk.
What to ask: "Is there a mechanism to verify AI response accuracy? Where is the human verification step?"
Data Leak Pathways
AI systems can become new data leak channels. Employees inputting sensitive internal information into AI, or AI including other users' data in its responses.
What to ask: "Is there a mechanism to detect and block sensitive information input?"
Three Decisions Leadership Must Make
Solving security and privacy technically is the tech team's role. But these three decisions must be made by leadership directly.
First, define the boundaries of what data can go into AI. Not all data needs to go into AI. There should be a standard: "Data up to this level can be used with AI; beyond this, it doesn't go in."
Second, demand an incident response plan. A project without an answer to "What happens if there's a security breach?" shouldn't proceed. Incident detection, reporting chains, response procedures, and customer notification processes must all be defined.
Third, establish a regular audit framework. AI systems aren't set-and-forget. A system for periodically reviewing data access logs, usage patterns, and security configurations is necessary.
Security Isn't a Cost — It's Trust
Investing in security and privacy protection isn't a cost. It's preserving the trust of customers and employees.
Adopting AI quickly matters, but adopting it safely matters more. Because a single data breach can destroy trust that took years to build, in an instant.
Balancing speed and safety — that is leadership's responsibility in the AI era.