Yet what is possible in public health is not always so easy in national security. Western intelligence agencies must contend with laws governing how private data may be gathered and used. In its paper, GCHQ says that it will be mindful of systemic bias, such as whether voice-recognition software is more effective with some groups than others, and transparent about margins of error and uncertainty in its algorithms. American spies say, more vaguely, that they will respect "human dignity, rights, and freedoms". These differences may need to be ironed out. One suggestion made by a recent task force of former American spooks, in a report published by the Centre for Strategic and International Studies (CSIS) in Washington, was that the "Five Eyes" intelligence alliance (America, Australia, Britain, Canada and New Zealand) create a shared cloud server on which to store data.

In any case, the constraints facing AI in intelligence are as much practical as ethical. Machine learning is good at spotting patterns, such as distinctive patterns of mobile-phone use, but poor at predicting individual behaviour. That is especially true when data are scarce, as in counterterrorism. Predictive-policing models can crunch data from thousands of burglaries each year. Terrorist attacks are much rarer, and therefore harder to learn from. That rarity creates another problem, familiar to medics pondering mass-screening programmes for rare diseases.
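The screening problem alluded to here is the base-rate effect: when the thing being hunted is very rare, even a highly accurate test produces mostly false alarms. A minimal sketch of the arithmetic, using Bayes' rule with purely illustrative numbers (the function name and all figures are assumptions, not drawn from any real programme):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a flagged case is a true positive (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A test that is 99% accurate in both directions still performs poorly
# when the condition affects only 1 person in 10,000:
ppv = positive_predictive_value(prevalence=0.0001,
                                sensitivity=0.99,
                                specificity=0.99)
print(f"{ppv:.2%}")  # under 1%: false alarms swamp the true hits
```

The same logic applies to flagging would-be attackers in a large population: because genuine cases are so scarce, nearly everyone the model flags is innocent, which is exactly why rare-event prediction is harder than burglary forecasting.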