i. Though the level of intelligence we have will continue to grow over time, by definition it will always be incomplete. While it is useful to focus on external intelligence, we must also obtain better intelligence on the internal environment, focusing on risk factors. Using the LMCO (Lockheed Martin) kill-chain model, we can begin to map activity to phases. However, taking this view from a single "observable" in a network is insufficient: what we really need to know is whether that observable can be correlated with other activity over time (months to years) to reveal a potential attack in progress.

Assuming adversarial TTPs will adjust as technology and capability improve, the notion of a "signature" must evolve to match the kill-chain model, while at the same time consuming and producing threat intelligence. Big data for cyber security will provide the intelligence to make this model successful once we can define a common ontology, allowing long-term "look-back" over user, host, application, and intelligence activity. What we think of as signatures today evolves into algorithmic expressions that inform defenders where attention is needed.

Architecturally, we imagine each location or site hosting a big data platform locally, with the ability to report to a central console for local and multi-location, agency-wide analytics. The resulting data can be securely communicated to our law enforcement and intelligence communities on an opt-in basis. One could even imagine extending this to allow those communities to securely query those systems remotely.
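To make the idea concrete, the long-term "look-back" correlation described above can be sketched as a simple analytic: group observables by host, restrict to a multi-month window, and flag hosts whose activity spans several kill-chain phases. This is a minimal illustration; the event schema, the `min_phases` threshold, and the window size are hypothetical assumptions, not part of any existing system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Phases of the Lockheed Martin kill chain, ordered earliest to latest.
KILL_CHAIN_PHASES = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

def correlate_observables(events, window=timedelta(days=365), min_phases=3):
    """Flag hosts whose observables span several kill-chain phases.

    `events` is a list of dicts with keys "host", "phase", "timestamp"
    (a hypothetical schema). Returns {host: phases seen, in kill-chain
    order} for each host with at least `min_phases` distinct phases
    inside the look-back `window` ending at its most recent event.
    """
    by_host = defaultdict(list)
    for e in events:
        by_host[e["host"]].append(e)

    flagged = {}
    for host, evts in by_host.items():
        evts.sort(key=lambda e: e["timestamp"])
        latest = evts[-1]["timestamp"]
        # Keep only events within the look-back window of the latest event.
        recent = [e for e in evts if latest - e["timestamp"] <= window]
        phases = {e["phase"] for e in recent}
        if len(phases) >= min_phases:
            flagged[host] = sorted(phases, key=KILL_CHAIN_PHASES.index)
    return flagged
```

A host seen performing, say, reconnaissance in January, delivery in June, and command-and-control traffic in November would be flagged, whereas a single observable in isolation would not; this is the sense in which a "signature" becomes an algorithmic expression over months of activity rather than a pattern match on one event.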