
I'd really like to see improvements like these:

- Some technical proof that my data is never read by OpenAI.
- Proof that no logs of my data, or data derived from it, are saved.
- etc.



I don't think this is technically possible without something like homomorphic encryption, which imposes too large a runtime cost to be usable with LLMs.
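To make the idea concrete, here is a toy sketch of what "homomorphic" means: the server computes on ciphertexts without ever decrypting them. This uses textbook (unpadded) RSA, which happens to be multiplicatively homomorphic; it is insecure and for illustration only, and the fully homomorphic schemes an LLM would actually need are vastly more expensive, which is exactly the runtime cost at issue.

```python
# Toy demonstration: textbook RSA is multiplicatively homomorphic,
# i.e. Enc(a) * Enc(b) mod n decrypts to a * b. Parameters are the
# classic tiny textbook values -- utterly insecure, illustration only.

p, q = 61, 53            # tiny primes (never use in practice)
n = p * q                # modulus: 3233
e = 17                   # public exponent
d = 2753                 # private exponent (e*d ≡ 1 mod lcm(p-1, q-1))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
# The "server" multiplies the two ciphertexts without decrypting:
c_prod = (encrypt(a) * encrypt(b)) % n
# Only the key holder can recover the result of the computation:
assert decrypt(c_prod) == a * b   # 42
```

Schemes that support arbitrary computation on ciphertexts (fully homomorphic encryption) exist, but every gate of the computation runs orders of magnitude slower than in plaintext, which is why it's currently impractical for LLM inference.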

They don't even try to prove it any other way.



