Abstract
Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by developing responsible principles and practices for AI system design. At the same time, defense institutions are drafting ethical guidelines and requirements for the development and use of AI in warfare. However, differing ethical and procedural approaches to technological development, a research emphasis on offensive uses of AI, and the lack of appropriate venues for multistakeholder dialogue have led civilian and defense entities to operationalize responsible AI principles and practices in divergent ways. We argue that this disconnect between civilian and defense responsible development and use practices leads to underutilization of responsible AI research and hinders the implementation of responsible AI principles in both communities. We propose a research roadmap and recommendations for dialogue to increase the exchange of responsible AI development and use practices between the civilian and defense communities. We argue that creating more opportunities for exchange will stimulate global progress in the implementation of responsible AI principles.