
AI tools now assist with everyday tasks like research, coding, and note-taking. But for Harsh Varshney, a 31-year-old Indian-origin Google employee in New York, they also demand strict privacy habits.
“Day-to-day, they help me with deep research, note-taking, coding, and online searches,” Varshney said in a conversation with Business Insider. After two years on Google’s privacy team, he now works on the Chrome AI security team, defending the browser against hackers and AI-driven phishing.
Varshney outlined four personal practices to protect data when using AI.
First, he treats AI like a public postcard. “Sometimes, a false sense of intimacy with AI can lead people to share information online that they never would otherwise,” he said. He avoids sharing credit card numbers, Social Security numbers, home addresses, or medical history with public chatbots, since models can memorize that information and later expose it to other users.
Second, he considers which “room” he is in. Enterprise AI tools, which generally avoid training on user conversations, are better suited to work discussions. “Think of it like having a conversation in a crowded coffee shop where you could be overheard, versus a confidential meeting in your office that stays within the room,” he explained. Varshney avoids public chatbots for Google projects and uses enterprise tools even for something as routine as editing an email.
Third, he deletes chat history regularly, since even enterprise tools retain past conversations. “Once, I was surprised that an enterprise Gemini chatbot was able to tell me my exact address, even though I didn’t remember sharing it. It turned out, I had previously asked it to help me refine an email, which included my address,” he said. He also uses “temporary chat” or incognito modes so conversations aren’t stored in the first place.
Finally, he sticks to trusted tools like Google’s AI, OpenAI’s ChatGPT, and Anthropic’s Claude, while reviewing privacy settings. “It’s also helpful to review the privacy policies of any tools you use. In the privacy settings, you can also look for a section with the option to ‘improve the model for everyone.’ By making sure that setting is turned off, you’re preventing your conversations from being used for training,” he said.
“AI technology is incredibly powerful, but we must be cautious to ensure our data and identities are safe when we use it,” Varshney added.
