A framework to evaluate generative AI projects

A risk-demand framework for evaluating whether a project is suited to a generative AI approach, plus two "buckets" of new generative AI features in Vertex AI.

NPW Research

Revolutionary, yet untested

Microsoft Security Copilot, built on GPT-4 and Microsoft's own security model, consolidates data streams to provide comprehensive security analysis. It can analyze alerts, create visual representations of network activity, and guide investigations. It tracks users' actions, generates summaries, and produces presentations that brief non-security stakeholders, such as executives, on ongoing investigations.

But this game-changing technology is not without challenges. If compromised, Security Copilot's vast information access could be exploited by adversaries, who could potentially poison its training models. Trust in AI is crucial, and inaccuracies may deter security analysts from relying on it. So it is still too early: let the copilot establish its credentials first. In NPW Insights this week, read about a framework for choosing generative AI projects. See the post below for details.


Survey findings about adoption of software supply chain practices

An Open Source Security Foundation survey finds that adoption of the practices recommended in the Supply-chain Levels for Software Artifacts (SLSA) framework correlates with how useful those practices are perceived to be. While practices like centralized build services, isolated builds, and ephemeral builds see widespread adoption, provenance generation (required for SLSA Level 1) lags significantly. This is consistent with survey respondents questioning its usefulness, and it also aligns with findings from another recent publication on SBOM use cases. The implication: explaining the usefulness of a practice is crucial to driving its adoption.
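To make the lagging practice concrete: a SLSA provenance attestation is an in-toto statement that links an artifact's digest to the builder that produced it. The sketch below assembles a minimal one in Python; the field names follow the slsa.dev provenance v1 predicate, but the build type and builder URLs are hypothetical placeholders, and real attestations are generated and signed by the build platform rather than hand-rolled like this.

```python
import hashlib
import json

def make_provenance(artifact_name: str, artifact_bytes: bytes, builder_id: str) -> dict:
    """Sketch of a minimal in-toto statement carrying a SLSA provenance
    predicate: it records what was built (subject digest) and who built it
    (builder id). Illustrative only; not a signed, platform-generated attestation."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{
            "name": artifact_name,
            # The digest ties the attestation to one specific artifact.
            "digest": {"sha256": hashlib.sha256(artifact_bytes).hexdigest()},
        }],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                # Hypothetical build-type and repository URLs for illustration.
                "buildType": "https://example.com/build-types/container@v1",
                "externalParameters": {"repository": "https://example.com/org/repo"},
            },
            "runDetails": {
                "builder": {"id": builder_id},
            },
        },
    }

statement = make_provenance(
    "app.tar.gz", b"release bytes", "https://example.com/builders/ci@v1"
)
print(json.dumps(statement, indent=2))
```

A consumer verifying the artifact would recompute the sha256 of the file it downloaded and compare it against `subject[0].digest`, then check that `runDetails.builder.id` names a trusted build platform, which is the link in the chain that provenance generation exists to provide.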
