Researchers have discovered that certain factors in AI neural networks both cause data-privacy vulnerabilities and are key to those models' performance. Leveraging this finding, the researchers developed a new technique that better balances model performance against privacy protection. The discovery concerns defending neural networks against membership inference attacks (MIAs), which allow an attacker to determine whether specific data was used to train a given AI model. "Membership inference attacks compromise the privacy of individuals whose data is in the training set," said the paper's first author, Xingli Fang (方星力), a Ph.D. student at North Carolina State University ( ...
Researchers have recently identified several core components of large language models (LLMs) that play a key role in producing safe responses. Building on this discovery, the team developed and validated a new class of training methods that improve LLM safety while minimizing the so-called "alignment tax," i.e., the performance cost incurred when strengthening a model's safety ...
Approximately 3,500 years ago, in the Bronze Age settlement of Cabezo Redondo in present-day Villena, a fire razed dwellings ...
As the electric vehicle (EV) market surges, the biggest anxiety for owners and manufacturers remains the battery. How long ...
Using sintered lunar regolith for heat storage, Harbin Institute of Technology researchers demonstrate how a closed Brayton ...
As the global demand for sustainable energy solutions intensifies, the efficiency of devices like metal-air batteries and ...
A new review highlights how combining multiple ultrasound techniques may help detect and assess portal hypertension in liver ...
By designing a hybrid system with variable-sized neurons, the key problems in the manufacturing process of ODNNs were ...
Vassdalen in Northern Norway on 5 March 1986. The tragedy strengthened collaboration between NGI and the Norwegian Armed ...
This study systematically explores the exfoliation feasibility and optoelectronic properties of 24 types of all-inorganic ...
In an unprecedented field experiment, an international research team led by Goethe University Frankfurt, the University of ...
Researchers at Örebro University have developed a new AI-driven production system that can significantly improve efficiency ...