notary + tuf proposal #38
Conversation
@endophage the deck is at https://goo.gl/moEKQp but I did not find any recording (I'm not sure it was recorded), nor meeting notes (it seems notes only exist up to the June 6th meeting).
@chanezon the @cncf/toc will review the project backlog next meeting (Tuesday the 11th) and make the formal decisions on project proposal invitations, thanks for getting an early start!
Good news, after today's TOC meeting the @cncf/toc decided to formally invite Notary/TUF to become a CNCF inception level project. I ask the TOC and wider CNCF community to give this proposal some RFC love before we put it up for a formal vote in a couple of weeks.
*Sponsor / Advisor from TOC:* Solomon Hykes

*Preferred maturity level:* incubating
I believe we will go forward with this project at the "inception" level per @cncf/toc discussion today
@monadic can you confirm that you want this in as inception or incubating? I may have misheard you
waiting on confirmation to change this (or not)
Hey @cncf/toc, as we meet tomorrow, can I request you review this proposal and make any comments necessary, along with any requested due diligence.
After today's TOC meeting, we did a final call for RFC from the TOC and wider CNCF community: https://lists.cncf.io/pipermail/cncf-toc/2017-August/001047.html If there are no strong objections, we will call for a formal vote towards the end of next week.
== Notary & TUF Proposal

*Name of project:* Notary & TUF
Is there a reason that both projects are being proposed together? I appreciate that Notary is a very widely used implementation (because it is what Docker uses), but proposing a specification and an implementation in one go doesn't sound right to me.
@cyphar Is there another widely used implementation of TUF?
Tor uses it for their update system for the TBB (from memory they were the first non-academic users). I also believe that Python's pip ecosystem uses TUF (or there was some planned integration at some point). Here's the list that TUF maintains: https://theupdateframework.github.io/#integrations.
But most importantly, there already is an upstream Python TUF implementation that was written by the TUF designers. It's likely not as widely used as Notary clients (since there are a large number of Docker users), but that would mean that CNCF would be getting two different TUF implementations in one proposal (as well as the spec).
I hadn't seen a switch; I thought Tor still used Thandy, on which TUF is based. Python does not use TUF. There are 2 proposed PEPs (458 & 480) and we've talked to the maintainer of pip; they're not going to get around to it for a while.
The submission of the spec is by the TUF designers at NYU. As for their Python implementation: gRPC has integrations with many languages and that seems to be nothing but a good thing. These are not competing implementations, they are complementary. One should ultimately be able to publish using the Python code and consume using the Go code.
Ah, okay I wasn't aware that Python hadn't switched. I have actually brought up TUF to the zypper folks (our package manager), so don't think I'm not interested in it as a project. 😸
> gRPC has integrations with many languages and that seems to be nothing but a good thing.

That's not the same thing though. gRPC has multiple integrations because, in order to use gRPC with a language, you have to generate client/server code from the schema. You don't need to do that with TUF, and as you said the publishing and verification can be done by either implementation.

This situation would be more like wanting to include the D-Bus "specification" as well as two different D-Bus implementations in one proposal. If the key benefit or differentiator is the language they're implemented in, I'm struggling to see why you need both.
I guess I look at it differently. We could simply tell everyone "gRPC compiles protos to C, you can work out how to integrate that into your language" but we don't, because nobody would use it. One can publish a request with a gRPC Python client and consume it with a gRPC Go server. There's no actual need to generate code for gRPC; it's a productivity multiplier that we can do so. It's similarly a productivity multiplier if one wants to integrate with TUF and finds out there's already a canonical library in the desired language.

Notary allows one to integrate with TUF using Go. It provides a library and a convenience CLI, along with some server applications that simplify management of a TUF repo. The Python implementation allows one to integrate with TUF using Python; it provides only a library, with no CLI (beyond the Python interpreter) or server applications. I'm hopeful other language integrations appear and can be folded into the family.
The Update Framework (TUF) is a specification designed specifically to solve provenance and trust problems as part of a larger distribution framework.

Notary is a content signing framework implementing the TUF specification in the Go language. The project provides both a client and a pair of server applications to host signed metadata and perform limited online signing functions. It is the de facto image signing framework in use by Docker, Quay, VMware, and others.
> It is the de facto image signing framework in use by Docker, Quay, VMware, and others.

Isn't that because Docker only supports Notary, and so Quay/VMware have to use it? Or are they using it in another capacity I'm not aware of?
The integration of notary is used to map a human readable name to a sha256 digest in a secure and verifiable way. While not directly integrated, it's possible for anyone else to write a tool that does a similar conversion and use it against either the docker CLI (e.g. `docker pull me/my_image@sha256:...`) or the docker daemon API.
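A minimal sketch of that name-to-digest mapping, shown from the publisher's side where the digest is computed from the manifest bytes. The `resolveToDigest` helper is hypothetical (in a real Notary flow the client reads the digest from signed TUF metadata rather than computing it locally):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// resolveToDigest pins image content to a verifiable reference of the
// form name@sha256:<hex>. Illustrative only; not Notary's actual API.
func resolveToDigest(name string, manifest []byte) string {
	sum := sha256.Sum256(manifest)
	return fmt.Sprintf("%s@sha256:%x", name, sum)
}

func main() {
	manifest := []byte(`{"schemaVersion": 2, "layers": []}`)
	ref := resolveToDigest("me/my_image", manifest)
	// The pinned reference can be passed to `docker pull` directly.
	fmt.Println(ref)
}
```

Because the reference embeds a content hash, any tool in the chain can re-verify the pulled bytes without trusting the registry that served them.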
I fervently hope that Quay uses it because 1) it's in Go and that suits them, and 2) it's the best signing framework available and there's no point in duplicating work. @ecordell as our maintainer from Quay, any thoughts?
> The integration of notary is used to map a human readable name to a sha256 digest in a secure and verifiable way.
Right, but I think you're missing my point. Other people have tried to include alternative proposals (Red Hat comes immediately to mind) and they haven't been accepted for a variety of reasons -- instead they've been implemented as wrappers around Docker's tooling (such as Project Atomic). The only supported way of cryptographically signing image identities in Docker is with Notary, and thus anyone who wants to support secure registries must use Notary.
Whether you could in theory add other implementations is not relevant to the discussion of whether the statement

> It is the de facto image signing framework in use by Docker, Quay, VMware, and others

is a testament to the popularity of Notary, or a testament to the necessity of using it due to the popularity of Docker. This is yet another reason why I asked for examples of Notary use outside of Docker.
I think Quay is a good example of how Notary is flexible rather than single-purpose. We're using Notary as a library, wrapping it and extending it to suit our needs. Over time I think Notary could become configurable enough that the wrapping isn't necessary at all, but that's somewhat orthogonal.
Although it's true that we're still signing container tags the same as DockerHub, this is an artifact of wanting to be compatible with the docker client first, not because Notary is forcing our hand. We've considered signing different metadata that our Quay-specific client quayctl (you've heard of it, right?!) would understand, but decided to focus on what existing tooling (the Docker client) understands for now.

For what it's worth, when we started looking into using TUF at Quay, we decided to use Notary despite Quay being a Python codebase.
Another use case we've dug up: CloudFlare's PAL tool uses Notary for container identity, allowing one to associate metadata such as secrets with running containers in a verifiable manner: https://blog.cloudflare.com/pal-a-container-identity-bootstrapping-tool/
Additionally, LinuxKit is using Notary to distribute its kernels: http://mobyproject.org/blog/2017/06/26/sign-all-the-things/
*Statement on alignment with CNCF mission:*

Notary is the most secure and widely adopted implementation of The Update Framework to date, and represents a critical security building block for ensuring the provenance and integrity of data in the field of cloud-native computing. As an implementer of The Update Framework it can provide its guarantees over any arbitrary digital content, making it ultimately flexible to any use case requiring security guarantees against attacks up to and including the nation-state level.
> Notary is the most secure [...] implementation of The Update Framework to date

Is there some report on this I can read, which compares said implementations? Some quick searching didn't turn up anything, aside from a 2015 NCC Group audit that found a critical security issue in Notary (it's great that an audit was conducted and only a few issues were found; I'm just confused about this seemingly unsubstantiated comparative claim).

> As an implementer of The Update Framework it can provide its guarantees over any arbitrary digital content

Is it currently being used for that purpose anywhere? I am aware of TUF and how it works, but to my knowledge the only major user of Notary is the Docker ecosystem, for image signing. While that is definitely a large user base, and I am aware that Notary can be used with arbitrary data, I would like to see other users of Notary in order to be convinced that it really is a generic project that can clearly benefit cloud computing outside of the world of Docker.
I don't have a comparative analysis for you, but I can say that Notary has added security features that other TUF implementations do not have, such as YubiKey integration.

As far as how generic it is, Notary itself has been maintained as a vanilla TUF implementation. You'll find nothing in its code that ties it to container images or any other specific type of target. As noted above, Notary and the Python TUF implementations interoperate. I hope we can agree that TUF is generic and has use cases outside Docker.
> I don't have a comparative analysis for you

So then do you agree that the statement about it being "the most secure TUF implementation" is not substantiated? Do you mind removing it or reworking this paragraph then?

> As far as how generic it is, Notary itself has been maintained as a vanilla TUF implementation. You'll find nothing in its code that ties it to container images or any other specific type of target.

I believe you when you tell me it's generic enough not to be tied to container images, but I'm asking for an example of it being used in that way. The reason I'm asking is that its current main use is container image signing (and that is what all of the documentation I could skim through describes). This is a topic that is next on the list of things to be discussed within the OCI, and if Notary's only real use at the moment is image signing, it feels too early to include it (given that there are many other ways that image signing is done by AppC/Flatpak/etc.).

> As noted above, Notary and the Python TUF implementations interoperate.

I am still struggling to understand why both projects are being proposed at the same time. TUF has a reference implementation (which is being pushed as part of this proposal), and an alternate implementation (Notary) is being pushed as well. They both interoperate, so only one implementation is required to live in the CNCF, and surely any other implementations can be maintained separately.

More importantly, I think that discussions about including TUF itself should be taken separately from discussions about including Notary. For example: at the moment, I am not convinced about the inclusion of Notary (for the reasons discussed) but I am fairly happy with including TUF. If we proceeded to a vote, I would be forced to vote against this proposal because I am not yet convinced about including Notary (despite the fact I would want TUF to be included). I don't think I have voting rights, but I would be surprised if no TOC member felt this way (or even the inverse).
I tend to agree that there seem to be more benefits to having two separate proposals (one for TUF, and another for Notary), which could be separately considered and voted upon, than to combining them into a single proposal to be accepted or not.
The goal of submitting Notary and TUF together is to provide a complete solution. Taking one and not the other means CNCF contains only half a solution. Schedule A of the CNCF charter states:
When successful, the cloud native computing foundation will establish:
- Standardized interfaces between subsystems.
- A standard systems architecture describing the relationship between parts
- At least one standard reference implementation of each sub-system.
- Thinking about adding extensible architecture that end users can extend, replace or change behavior in every layer of the stack for their purposes.
The TUF spec alone, or Notary alone, does not achieve these four points; together they do. If complementary implementations are creating confusion, it would be more logical to drop the Python implementation from the proposal and keep Notary, the implementation people are already using in the cloud native ecosystem. All due credit to Justin Cappos et al. for the Python implementation, but it requires an expert understanding of TUF to use and, in a real-world use case, does not provide any distribution functionality for TUF data, requiring the user to configure and manage their own distribution mechanism at a much lower level than Notary.
@endophage the quoted text isn't implying that every project must achieve all of those points; rather, that's the desired success criteria for the CNCF as a whole. Indeed if you consider the projects already accepted you'll see they fulfil these to varying degrees. So I don't think this justifies these landing together.
Having two independent interoperable implementations is a requirement for most standards processes, and I see having both Notary and the Python implementation as well as the specification together being a very important advantage in making sure that the implementations and standard are correctly implemented and unambiguously specified.
Compared to the other project proposals I have reviewed this seems to be light on details and specifics. Maybe everything is in the presentation but it would be nice to see more detail in this proposal.
I agree with @bradtopol. There is a lot more information in the PR discussion now than in the proposal itself. It would be useful to add information about features, attack vectors, use cases, why a joint submission, and answers to questions about important scenarios, which weren't captured by the list of high-level use cases ("container-image signing" isn't very enlightening), such as image update streams, image mirroring, and air-gapped environments.
I don't think we need to include the comparison table, and the TUF vs. GPG section doesn't mention any important advantages of TUF over GPG with respect to the most important attack vectors, so I'd just leave that out, also. The big difference as I see it is in the objective: securing an update stream vs. relatively static/independent objects.
*License:*

* Notary: Apache 2.0
* TUF: MIT
Are there any plans to change this license, or has the MIT license been approved by the GB? I don't think we could take this without an explicit patent license.
This is now dual licensed MIT+Apache as of theupdateframework/python-tuf#482 - @endophage can you update?
*Source control repositories:*

* https://github.com/docker/notary
* https://github.com/theupdateframework/tuf
Is there anything in theupdateframework github org that would not be included?
*External Dependencies:*

* https://github.com/docker/notary/blob/master/vendor.conf
@caniszczyk Have the licenses of dependencies been checked?
@bgrant0607 all were fine, the only issue that came up is what hit containerd, we have to get them to switch to another TOML library: notaryproject/notary#1210
It also contains a fork of go-tuf, which isn't in the vendor directory:
https://github.com/docker/notary/tree/master/tuf
Some advice on that would actually be useful. That's not a fork of go-tuf in any reasonable sense. It did start as a fork 2.5 years ago, but we've added many features, re-written most of the packages, and re-architected near enough the entire system. We submitted a couple of minor pieces back to go-tuf early on but have since diverged to the point that nothing can be usefully submitted back.

Is there some point at which a fork is no longer a fork, or are we stuck in a "my father's axe" type situation?
https://en.wikipedia.org/wiki/Ship_of_Theseus
Good question, but probably at least as much a legal one as a technical one.
The complicated part is that every incremental change of any given project is a derived work, and BSD-3-Clause (unlike MIT/X11) does not permit sublicensing. You can license your changes under another (compatible) license, but derived works are always a fun legal topic. IANAL, and I'd recommend asking one.
I asked this question in the email thread, but I'll ask it a different way here. Let's say I'm a software provider. I want to build an image, sign it, push it to an image registry, have it automatically mirrored to a number of other registries in different clouds, and enable users to cache it in on-prem enterprise registries of their choice. A vulnerability scanning service might also want to sign it with the additional property that it had been scanned for vulnerabilities on a particular date. I would like my customers/users to be able to verify trust independently from the image distribution mechanisms. How could I do that with Notary? |
@bgrant0607 delegations are the most straightforward way to allow a 3rd party such as a scanning service to not only sign the image, but attach a checksum of their vulnerability report to the image, and have that relationship of image -> report be signed and verifiable (they could include the entire vuln report in the TUF data, though I wouldn't advise that, simply due to the bloat it would cause). Mirroring has historically been more tricky, as names have meaning and therefore, among other reasons, we tied a notary repository to a single image name. The tricky part came from the fact that the image name and location were intertwined. Containerd has recently solved this problem and now decouples image name from location. There is still a little work to be done, but this will in the near term make mirroring a case of copying the TUF files from one server to another. A more interesting case is mirroring into an air-gapped environment, where updated timestamps cannot necessarily be fetched as they expire. The most robust solution to this is to re-sign mirrored data with one's own keys, while it is also entirely possible to maintain the original targets signatures for verification of the external provenance.
Final RFC call from the @cncf/toc in preparation for the TOC meeting on Aug 15th
This might ultimately be an issue of branding, but as things stand I do share some of the mentioned concerns about this being a joint proposal. It'd be one thing if Notary were positioned as a generic TUF implementation, but all of the documentation and usage notes still read as if it's only for image signing (as cyphar pointed out here), even though it might be usable for other purposes. Furthermore, the TUF project itself still talks about its goal being to build a reusable library for other software systems, yet there's no linking between the two projects that I can see. Crazy thought: what about moving the go-tuf implementation inside the TUF project? Failing that, could we see some more illustrations of how Notary can be used more broadly?
TUF is definitely considering multiple client implementations, and has an effort to help align their compatibility: theupdateframework/taps#36
Another line of questioning. AIUI, TUF was intended/designed to secure the software update process: how does this map to container images? Using only image tags, there is no clear distinction between a specific image and a stream of similar images. For anyone wanting continuous updates, the tag is often used to represent the stream, or update channel. The consumer of the image typically needs control over when updates are deployed (e.g., via rolling update and/or staged deployment pipeline). One way to do that is to resolve the tag to a digest at deployment time rather than runtime, validate the digest's signature, and then pull by digest. Operators would like to monitor properties of the images deployed (e.g., was previously verified, is / is not out of date), but would not typically immediately stop executing images in active service. That would need to be a higher-level policy decision.
@jonboulle would it make sense for there to be a single GH org that all the TUF projects can co-exist under? It will require some reorganization but then we could have a repo for the spec and TAPs, a separate repo for the python implementation, and a repo for notary? Additional language implementations would then get a repo under that org?
@bgrant0607 We agree on resolving a tag to a digest at deployment time, and this is a use case notary capably fulfills. This is a low-level construct and, as with any platform for managing systems, we have built a higher level abstraction in Docker Swarm. We would like to replicate the same functionality in K8s. A Swarm Service resolves tags to digests at deployment time and maintains that service definition until an explicit update is issued. During that update, one of the possible operations is to update the image itself, using notary to resolve a tag to a new digest. If the tag -> digest mapping changes, the service is not stopped. It will continue using the digest that was pinned at deployment time until it is ordered to update. By comparing the active service definition to the current state of the notary repo, it is possible to message things such as the running containers being based on an out-of-date image.
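The pin-at-deploy, update-on-demand behavior described above can be sketched with a toy model. A plain map stands in for the notary repository, and `deploy`/`outOfDate` are hypothetical helpers, not a Swarm or Notary API:

```go
package main

import "fmt"

// Service pins the digest that a tag resolved to at deployment time.
type Service struct {
	Image  string // e.g. "registry/app:stable"
	Pinned string // digest resolved at deploy time
}

// deploy resolves the tag once, against the current repo state.
func deploy(repo map[string]string, image string) Service {
	return Service{Image: image, Pinned: repo[image]}
}

// outOfDate reports whether a newer signed digest exists for the tag.
// The service keeps running on its pinned digest regardless; acting on
// this signal is a higher-level policy decision.
func outOfDate(repo map[string]string, s Service) bool {
	return repo[s.Image] != s.Pinned
}

func main() {
	repo := map[string]string{"registry/app:stable": "sha256:aaa"}
	svc := deploy(repo, "registry/app:stable")

	repo["registry/app:stable"] = "sha256:bbb" // publisher signs a new image
	fmt.Println(outOfDate(repo, svc))          // the pinned digest is now stale
}
```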
For posterity, some questions were also answered in this email thread:
This thread briefly discusses GPG for signing DEB packages and APT repos (and VMware Harbor for docker images). The benefits of Notary over those are listed as revocation, freshness, delegation to alternate signers, and painless key compromise recovery. It would be good to get a bit more balanced detail on this (relative strengths, weaknesses, similarities and differences) vs these and other alternatives (Thandy, Web of Trust etc).
Apologies for the long silence, I've been battling priorities while collecting input from the relevant groups on the submitter end. Collecting the points from the Docker and TUF side, I want to begin by ensuring we're all talking at the same level of abstraction. We see TUF as the next generation tool for signing collections of digital content. The common use case for this is package management in its various guises, be it debs, rpms, or container images.

**TUF vs traditional package signing methods**

**Features**
**Protection from attacks**

Definitions taken from the TUF specification:
Traditional package signing methods commonly revolve around GPG signing of various metadata fragments. In a sense, GPG is a primitive used by traditional package signing systems. If there were strong enough desire, GPG could be integrated into any existing TUF implementation as an available signing option. TUF recognizes that the existing signing systems have not gone far enough to address the threats that are meaningful in the context of software distribution. It proposes a complete system for secure software distribution that addresses these threats. Over the years many package management signing systems have been developed, and they continue to make the mistakes of the past because the community has largely focused on the expertise required to develop crypto primitives, without also acknowledging the expertise required to design systems. To quote Duncan Coutts in his explanation of Haskell's choice to use TUF: "TUF has been designed by academic experts in the subject, based both on research and existing real-world systems. Our crypto-humility should cover not just crypto algorithms but extend to whole system designs."

**TUF vs GPG**

GPG currently has much greater recognition than TUF. This is expected given its age and the lack of competition it has received. This does not automatically make it a good solution to signing requirements. GPG lacks the same features as our "traditional package managers", as they have largely added very little, if anything, meaningful on top of the GPG primitives. Nominally one could argue that GPG private key management is simpler than TUF private key management, purely on the basis that there are slightly fewer keys to manage. This marginal difference is a poor tradeoff in the face of ease of integration. GPG is well recognized as being difficult to use [1], and even more difficult to integrate with at the library level as a developer [2].
By comparison, one user was able to write a tool that uses Notary to sign and verify git tags during a hackathon, with no help from the Notary maintainers [3].

### Why a joint submission?

We want the TUF specification to be accepted into CNCF because it will make a clear statement of the importance of, and the expectations the community must have for, the security of their software distribution channels. Furthermore, we want there to be implementations in many languages to enable broad adoption. A joint submission of TUF and Notary is a highly cohesive package that lays a solid foundation for package signing in CNCF, providing both the spec for guidance and an implementation in Golang, the majority language among existing CNCF projects. We are at an inflection point in the methods used to develop and deploy software. The paradigm shift happening right now must be capitalized on, lest we risk extending the unacceptable status quo in software distribution security.

### Use cases

The most unequivocal use case for TUF and Notary is securing software update systems. This is the stated scope and primary goal of TUF; it is also a stated goal that the framework should be usable with both new and existing software update systems. We should define what we mean by “software update system” in this context: a process and set of utilities that allows one to download and install entirely new software, and upgrades to existing software, within a specific environment. Some examples are Python’s pip, Debian’s APT, and Red Hat’s YUM. Container images map very closely to a typical software update system payload. Like some of the systems mentioned, an image uses TAR files containing the collection of files to be installed on the requesting host, and a manifest, a JSON file, describing how those files are used to set up and run the container. The manifest is the root of a Merkle tree, containing the SHA256 checksums of the layers that make up the image.
This allows us to efficiently sign only the manifest using Notary, while a user can still verify everything they download for the image.

We also see a future for Notary and TUF in signing service or pod definitions. This strengthens protections around what software can run on a cluster. We envision a single Notary repository maintained within a cluster, to which recognized delegates can push updates. This would be the only mechanism for a cluster to receive updates to its definitions, and it automatically acts as a second factor of authentication (something you have: the private key) in the presence of traditional username+password based auth.

Finally, we recognize that there is a natural link between code identity and container, service, and pod identity. We believe that runtime identity ought to be tied to code signatures, so that policies can be set such that only particular images may assume a runtime identity. For example, a customer might specify that a particular signing process for container images is necessary in order to call particular APIs within a cluster. This link between image identity and container runtime identity requires a cryptographically strong, commonly shared image signing and verification system.

Use cases that we consider in scope and that are already implemented or can be accomplished now:
Use cases that are achievable with some additional work:
Out of scope:
|
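The manifest-as-Merkle-tree design described in the proposal above can be sketched in a few lines. This is a toy model, not the real OCI/Docker manifest schema or Notary's API: the point is only that signing the single manifest digest transitively covers every layer a client downloads.

```python
import hashlib
import json

def digest(blob):
    """Content-address a blob the way registries do: sha256 hex digest."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def build_manifest(layers):
    """A toy image manifest: the Merkle-tree root referencing each layer by digest."""
    manifest = {
        "schemaVersion": 2,
        "layers": [{"digest": digest(l), "size": len(l)} for l in layers],
    }
    return json.dumps(manifest, sort_keys=True).encode()

def verify_image(manifest, trusted_manifest_digest, layers):
    """Checking the trusted manifest digest transitively checks every layer."""
    if digest(manifest) != trusted_manifest_digest:
        return False
    expected = [entry["digest"] for entry in json.loads(manifest)["layers"]]
    return expected == [digest(l) for l in layers]

layers = [b"layer-0 rootfs tar", b"layer-1 app tar"]
manifest = build_manifest(layers)
trusted = digest(manifest)  # the only value that needs signing (e.g. by Notary)

assert verify_image(manifest, trusted, layers)
assert not verify_image(manifest, trusted, [b"tampered layer", b"layer-1 app tar"])
```

Note that tampering with any layer changes its digest, which breaks the manifest comparison even though only the manifest digest was ever signed.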
Thanks a lot for this super detailed comparison @endophage |
Where are we on the following use-cases?
|
The LinuxKit mentions are a bit of a cheat IMO. From memory, LinuxKit distributes all of its components in the Docker registry (with a different structure, and different contents to an image, but that's where it exists). Also, I'm not sure what the reason for the comparison tables is -- the only mention of GPG in this thread

My main concern was (and still is) that I don't find the reasoning for the joint submission particularly convincing. The language reasoning seems quite odd, because notary is not just a library, it's a full application. I would understand the argument if you wanted to include both a Python pytuf and a Go go-tuf library.
The OCI still has not come to an agreement (or even had fruitful discussions) on this topic yet, and IMO that is the right place to have that discussion. This sort of language makes me worried that the inclusion of these projects will influence the discussion in the OCI. That was the reason I specifically asked for existing non-Docker usecases. |
@vbatts all good and supported use cases, addressed below:
Notary by default uses an offline cache when an online notary server can't be reached. One can capture the state of the repo directly through HTTP requests or by using the Notary client to pull and verify the current state, then carry those files into the air gap to update its cache. Alternatively and depending on specific requirements, one can set up a notary server within the air gap and import only the delegations from outside, preserving the original signatures on them, while also re-signing timestamps internally. The first option works well when frequent updates are expected, the second option is well suited to situations where updates are infrequent.
Notary permits users to request old copies of the metadata and preserves old versions. The TUF spec actually defines the syntax for referencing a specific version of a file and historically defined how to reference a specific checksum. Notary supports both formats. Notary also has a “changefeed” feature that provides a queryable log of when repos received updates (an update being defined as a new timestamp being published). We need to add documentation for this feature to the Notary repo.
The case you're describing here is exactly the CI pipeline signing use case already included and something Notary supports as demonstrated by its implementation in Docker’s enterprise products. @cyphar LinuxKit uses distribution and notary to distribute kernels that are not run in containers but installed directly on the host. This is a distinct use case. The fact it can be serviced by the same technologies does not make it invalid, it serves to demonstrate the strength and flexibility of those technologies. Can you also help us add a line to the analysis for zypper and the attacks it protects against? |
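On the versioned-metadata point above: TUF's consistent-snapshot convention names each metadata version explicitly, which is what lets a client pin or audit an exact historical state. A toy sketch of the two referencing forms mentioned (version-prefixed, per the current spec, and the older checksum-prefixed form); the file layout here is illustrative, see the TUF specification for the exact rules:

```python
import hashlib

def versioned_name(role, version):
    """Consistent-snapshot form, e.g. '42.targets.json'."""
    return f"{version}.{role}.json"

def hashed_name(role, content):
    """Older checksum-based form, e.g. '<sha256>.targets.json'."""
    return f"{hashlib.sha256(content).hexdigest()}.{role}.json"

# A repository stores every published version under both stable names.
store = {}
for version, content in [(1, b'{"targets": {}}'),
                         (2, b'{"targets": {"app": "sha256:..."}}')]:
    store[versioned_name("targets", version)] = content
    store[hashed_name("targets", content)] = content

# A client pinned to version 1 keeps resolving the same bytes after v2 ships.
assert store["1.targets.json"] == b'{"targets": {}}'
assert store["2.targets.json"] != store["1.targets.json"]
```

Because old versions are never overwritten in place, a changefeed or audit query can walk the full publication history of a repo.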
On Wed, Sep 6, 2017, Aleksa Sarai wrote:

> My main concern was (and still is) that I don't find the reasoning for the joint submission particularly convincing. The language reasoning seems quite odd, because notary is not just a library, it's a full application. I would understand the argument if you wanted to include both a Python pytuf and a Go go-tuf library.

I don't mean this comment to be taken in more than an advisory way, but CNCF staff would prefer a single submission, from the perspective that we already have to spend a ton of time explaining what each of the different projects do and how they work together (e.g., https://www.cncf.io/projects/). Now, we already host OpenTracing and (hopefully soon) might host Jaeger as an implementation. But those projects started (and came to the CNCF) separately, and there are other (independent) OpenTracing implementations, like Zipkin.

Also, note that Notary could come in together as a single project; but if it made sense in the future for it to split into a spec and an implementation, that could occur with a vote from the TOC.

Dan Kohn, Executive Director, Cloud Native Computing Foundation |
Catching up. The "arbitrary installation attack" (e.g., via MITM) is critical for a public repository or marketplace of container images. It seems like the DoS attacks could be handled in other ways.

The version-related attacks are tricky with respect to container images. This was touched on in the response to @vbatts and previously touched upon in #38 (comment), but to recap: with Docker images and image registries in particular, there is no clear distinction between a specific image and a stream of related images, nor a notion of monotonically ordered versions. For any OTA update or other continuous deployment system, there is generally a way to subscribe to a stream of updates, even a choice among channels of update streams, such as stable vs development builds, release 2.x vs 3.x, etc. Examples include https://www.chromium.org/getting-involved/dev-channel, https://docs.docker.com/docker-for-mac/faqs/#stable-and-edge-channels, and https://coreos.com/os/docs/latest/switching-channels.html.

It is relatively common to use the tag (e.g., "latest") as a channel. One can then translate the tag to a digest representing the latest image to which the tag refers before deploying that image in a controlled manner (e.g., via rolling update, staged deployment, or blue/green deployment). It sounds like that is the recommended way to apply Notary to container images. Though there is no order, no history, and no metadata connecting the digest to the tag in the registry itself, Notary keeps track of that, which enables one to efficiently and authoritatively answer whether an image is currently and/or was ever a valid image on a channel, by querying the history maintained by Notary (correct me if that's wrong). It's a shame that Notary has to make up for deficiencies in the underlying model, but I suppose it needs to keep the history to prevent spoofing of the history.

Speaking of history and caching, what storage backends are supported by Notary? It looks like rethinkdb? https://github.com/docker/notary/tree/master/storage

Also, for posterity, the answer to my question about mirroring and decoupling source and identity (which could have been listed under use cases / scenarios that need additional work) was here:

Runtime application identity indeed is a bigger topic (e.g., aspects are also being worked on by spiffe.io and istio.io). We can discuss the use for application configuration separately. Unlike container images, resources in Kubernetes are expected to be dynamically modified. The pre-deployment application registry case would be similar to the container image case.

Also for posterity, it's great to see there is a process for adding maintainers:

My concerns have been addressed. Thanks for all the details @endophage. |
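The "tag as channel" question raised above can be made concrete with a toy model: a record of which digests a tag has pointed at over time, supporting both "what is the current image on this channel?" and "was this digest ever valid on this channel?". The class and method names are hypothetical, not Notary's actual API.

```python
class ChannelHistory:
    """Toy trust log mapping a tag (channel) to its ordered digest history."""

    def __init__(self):
        self._history = {}  # tag -> ordered list of digests

    def publish(self, tag, digest):
        self._history.setdefault(tag, []).append(digest)

    def current(self, tag):
        """The digest the channel points at right now."""
        return self._history[tag][-1]

    def was_ever_valid(self, tag, digest):
        """Whether this digest was ever published on this channel."""
        return digest in self._history.get(tag, [])

registry = ChannelHistory()
registry.publish("latest", "sha256:aaa")
registry.publish("latest", "sha256:bbb")  # rolling update of the channel

assert registry.current("latest") == "sha256:bbb"
assert registry.was_ever_valid("latest", "sha256:aaa")
assert not registry.was_ever_valid("latest", "sha256:ccc")
```

In a real system the history itself must be signed (as the comment notes, to prevent spoofing of the history); this sketch only shows the query surface.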
Image signing and software management address two different but related problems. Image signing addresses:
Software management provides a way to install and update packages. Notary integrates signing into a solution, leveraging TUF, incorporating a signing component at a higher level than package management. The above tables correctly point out that all of the package managers simply provide signing support, which is consistent with a modular Linux design. Why are package managers being compared with container distribution, when almost every container in the wild has used one of the package managers listed to install software during build? How can we compare, and by implication invalidate, package managers while we still rely on them within this model? The above tables are an apples-to-oranges comparison. Notary may make an interesting commercial software solution, but the architecture seems too tightly coupled to provide much value in furthering a container security story that can be applied broadly. |
@endophage I have some comments on your references regarding the problems with GnuPG. The https://blog.filippo.io/giving-up-on-long-term-pgp/ article has been rebutted by Neal Walfield. Smári's rant (https://www.mailpile.is/blog/2014-10-07_Some_Thoughts_on_GnuPG.html) on Mailpile's problems integrating GnuPG spawned a discussion on the gnupg mailing lists. It turned out that the Mailpile hackers did not use the annotated interface (--with-colons, etc.), rejected the use of our GPGME library, and had some partly contradictory requirements (e.g. supporting both the old gpg 1.4 and the modern 2.x). One of the GnuPG hackers then helped them to clean up some of their problematic code. In the end we integrated, and now keep on maintaining, a Python binding for GPGME so that writing one's own Python API to gpg can be avoided. |
@endophage see https://en.opensuse.org/openSUSE:Standards_Rpm_Metadata, specifically the suseinfo.xml section, for the expiry. |
Fair, though determining those delegations is what sometimes causes confusion among those observing, and even involved in, CNCF and the projects being donated (or even growing through incubators). TUF is a standard, and notary is an ideal implementation of that standard. That is not to say that TUF would be better donated to OCI, or that another alternative like Atomic/Simple Signing would be better in CNCF vs OCI. Though having the combination of options would cover the use-cases required for those building cloud-native infrastructure. |
@vbatts AFAIK, TUF has not been submitted to a standards body. It does aspire to support multiple implementations (e.g., in multiple languages): Given that there are multiple types of relevant artifacts that need to be updated, multiple repository implementations, multiple clients that would want to validate updates, etc., enabling multiple implementations seems like a desirable goal. |
Right, though my point is that it's a spec/standard being donated to a non-standards body. |
@endophage I just spoke to one of the APT developers while at OSS, and it appears that Debian/Ubuntu packaging also solves most of the problems you've marked as "not handled" (similarly to what @msmeissn has been describing for zypper/yumrepo). I would also be shocked to discover that RedHat's dnf and yum do not solve these problems (to the same degree as zypper/apt/etc), but I'm sure @vbatts can ping some RedHat folks that know more than me about it. @bgrant0607 I'm aware that CNCF is not a standards body (though as @vbatts has said, that makes it a bit odd that it is being proposed we include a spec that is being used as a standard for TUF implementations). But the issue that I'm trying to point to is that by CNCF blessing a particular signing system (which is almost exclusively used for container image signing), this may impact future discussions in the OCI. And as has been outlined above, container image signing is still an evolving topic with many different schemes that all have different trade-offs. |
@msmeissn I've updated the table to break out "YUM (RHEL)" and "YUM (SUSE)" and checked off the appropriate protections for the expiry time and for now checked off mix-and-match in relation to the top level signed repomd.xml. As far as the other protections, I'd like to know where the GPG keys are stored and what the access pattern looks like. Many of the attacks are relevant and applicable even in the case of the signed repomd.xml if access to a single host would allow one to update that file and get it signed. Do you have some documentation on the architecture of the signing and key management systems/process? |
@dankohn I will ping some of the APT folks, and I think @vbatts will ping some of the yum/dnf folks. I have seen the chart, and my reason for mentioning that in the comment above is that I showed the comparison table to one of the APT developers at OSS and they said off-hand that APT does handle more of the points than the ones described. |
Given the amount of noise, this table is not serving the purposes of this proposal. If we address all the possible inaccuracies I still think the comparisons are unhelpful. Can we return to discussing the merits of the proposal? |
@endophage SUSE builds RPM packages and yum repositories (and soon, hopefully, native containers) using the Open Build Service, http://openbuildservice.org/. The technical setup of such an instance is described here: https://en.opensuse.org/openSUSE:Build_Service_Signer |
While I (in principle) agree with your original point that container image signing and package signing are different, this proposal does not mention container images as the primary use-case of Notary. TUF was originally designed for all software management (including package management), and one of the big claims that TUF makes is that no other update system solves most of the problems it solves (the comparison charts posted confirm that to be the opinion of the TUF developers). So a discussion about "is that statement accurate" is very related to a discussion of the merits of TUF, especially since there are many alternatives in both package distribution and container image signing that do solve many of the same problems; blessing one system really should be done in a far less cavalier way. As for the merits of this proposal -- I still make the claim that submitting both together doesn't make much sense, that image signing is still not agreed upon in the OCI, and that I don't want the inclusion of Notary in CNCF to be seen as a "blessing" that effectively coerces OCI to use TUF as the image signing scheme. While you might (rightfully) say that something being in CNCF is not an endorsement, that's not how the community will see it. I do appreciate that there exist some PoCs of users of Notary that are not container images (or LinuxKit, which is still effectively using the same container image tooling). |
If there's a risk of prejudging activity within OCI, is there then a stronger case for accepting TUF on its own, without Notary? I agree with the proposal's criticisms of GPG as an end-user. Writing tooling even as simple as verifying the download of a binary, in a way that works across client platforms, is nearly impossible (GPG 2.0 and 2.1 use totally different data formats, GPGME is of varying quality across languages, and the behaviour of the API changes in breaking ways across minor releases). TUF has applicability to ensuring the integrity of code/binaries/other data up and down a stack, and in that sense is of greater utility than RPM/APT as specific applications of GPG. From a compliance perspective the promise of consistency at multiple levels is very nice, and would also be a useful addition to discussions re: container identity. I agree, however, that the comparison table should be updated to match how the actual package managers in distributions behave to mitigate individual risks. |
FWIW, @randomvariable is wrong in claiming that gpg 2.0 and 2.1 use different data formats -- in particular they do not for verifying signatures. In fact, the standard tool to verify signatures based on a known set of keys has not changed in any incompatible way. Maybe there is some confusion between gpg and gpgv. The latter is a stripped-down tool written ages ago to fulfill the goal of verifying package signatures; AFAIK, it is used by all Linux distros. It is as easy as
The gpg-agent, used to control the private keys, is not used by gpgv. OTOH, for signing gpg-agent allows much better protection of the signing keys and operationd can even be split between two boxes (server, desktop) to avoid the need to download large packages just for signing. |
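For reference, the gpgv invocation alluded to above typically looks something like the following; the filenames are placeholders, not a prescribed layout:

```sh
# Verify a detached signature on a package against a known keyring only;
# gpgv ignores the regular trust database and does not use gpg-agent.
gpgv --keyring ./trustedkeys.kbx package.tar.sig package.tar
```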
Sorry @dd9jn, I meant the secret keyring format. I didn't go into details because I wasn't sure of its relevance, but... In this particular instance, I was doing e2e tests to stand up CentOS with the ability to sign RPMs, and creating a new keyring from scratch in the process. |
[ From the beginning it has been documented that the only valid way to get the specified OpenPGP data format is by using the --import and --export commands. cat'ing pubring.gpg files worked in some sense but that was using undocumented behaviour ]. I know that people did it anyway, but then don't complain ;-) |
I feel like this is way off topic, but the specific issues were with GnuPG working with non-default keyring directories (hence the issue with file lengths affecting non-Fedora systems) and therefore not being able to trust the API would do the same thing across systems. |
Here are some hints on how the OpenPGP protocol (RFC-4880) can be used. For key rollover a mechanism exists in OpenPGP: the subkeys. Multiple signatures on data are a core feature of OpenPGP. I only briefly looked at TUF, but it seems to roll its own protocol. My suggestion is to avoid the proliferation of new signing protocols. |
Yes, that's what I've been suggesting from the very beginning. 😸 |
Thanks to @dankohn for PR'ing the Principles document authored by Brian, Alexis, Ken, and others (I assume that's what the "..." means :-) as I think it lends focus to, and answers some of the questions raised in the discussion here. First, addressing whether TUF is a standard, as has been noted a number of times, no standards body has adopted TUF. As noted in the principles: CNCF may develop written materials in the style of the current CNI interface document, or in the style of an IETF RFC for example. These CNCF “specification” materials are not “standards”. This has always been our expectation of TUF's status if accepted into CNCF. Furthermore, the TAP process and existing versioning of TUF already cleaves well to the principles: In general CNCF specifications will evolve as “living documents” side by side with the CNCF OSS projects that adopt them. ... specifications shall be updated and release versioned according to the conventions of a CNCF open source project. Per the principles doc, our understanding of CNCF is that it's up to the OCI to have its own discussion, and do its own due diligence on projects and decide which should become standards. If OCI comes up with a better solution than TUF, as the Principles note, CNCF is not trying to be a kingmaker and "overlapping projects are OK". In the meantime, TUF is a project that has strong momentum. Beyond the Python and Go (Notary) implementations, purpose built implementations can be found for package management in Rust, Haskell, and OCaml, along with the Uptane project for automotive software updates. Notary is seeing increasing use in the real world through multiple enterprise and open source integrations and has produced a highly attractive platform. It is a particularly relevant implementation of TUF for CNCF because of its open source nature and the prevalence of Go in existing projects. 
It supports significant features from TUF beyond the flynn/go-tuf implementation and adds meaningful functionality in the form of its Notary Server and Notary Signer services. |
I found the following documents to be helpful in understanding what Notary does and how it works: |
The issue being discussed about whether GPG can provide the same set of security guarantees as TUF seems to me to be missing the point. TUF is a specification that defines a protocol for software updates that gives many security guarantees, with a defined adversary model, based on a set of digital signatures in different roles. GPG is a general purpose signing mechanism that can be applied to many use cases, which is a very different thing. TUF specifically allows any kind of digital signature to be used; the spec says "The current reference implementation of TUF defines two signing methods, although TUF is not restricted to any particular key signing method, key type, or cryptographic library". GPG could well be used as a signing mechanism for TUF if required.

The discussion about how GPG can be used to fulfill the update requirements that TUF fulfills is missing the point, as of course it could, should you build up an equivalent security protocol to TUF that covers that set of threats; if you do this formally and with a detailed specification, rather than in an ad hoc way, then you will end up with a signing specification much like TUF. Indeed you may as well use TUF as it already exists to solve this problem, and if using GPG keys is important it is absolutely possible to use them while using TUF.

It is clear that many of the Linux vendors have started to construct parts of a protocol set similar to TUF using GPG, but there does not appear to be a formal reviewed specification in the way TUF has defined one, with detailed security review, at least as far as I can find. The discussion about which boxes in the table should be ticked, and the fact that no one can easily find definitive answers, does suggest that the specification is ad hoc rather than formally specified like TUF, or the answers would be much easier to find. |
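The point that TUF fixes the protocol rather than the signing primitive can be illustrated with a sketch: any backend exposing sign/verify can produce the signatures attached to TUF-style role metadata. The two "schemes" below are stand-ins built on HMAC purely to show the pluggable shape; they are not real OpenPGP or Ed25519 cryptography.

```python
import hashlib
import hmac

class Signer:
    """Toy pluggable signer: any scheme with sign/verify can back TUF metadata."""

    def __init__(self, scheme, key):
        self.scheme, self._key = scheme, key

    def sign(self, metadata):
        sig = hmac.new(self._key, metadata, hashlib.sha256).hexdigest()
        return {"scheme": self.scheme, "sig": sig}

    def verify(self, metadata, signature):
        return hmac.compare_digest(self.sign(metadata)["sig"], signature["sig"])

# Two different "schemes" signing identical TUF-style role metadata.
root_metadata = b'{"_type": "root", "version": 1}'
gpg_like = Signer("pgp+rsa", key=b"gpg-keyring-stand-in")
ed_like = Signer("ed25519", key=b"ed25519-key-stand-in")

sigs = [gpg_like.sign(root_metadata), ed_like.sign(root_metadata)]
assert gpg_like.verify(root_metadata, sigs[0])
assert ed_like.verify(root_metadata, sigs[1])
assert not gpg_like.verify(b"tampered", sigs[0])
```

The threat model, role layout, and expiry rules live in the specification; the signing backend is an interchangeable detail, which is why GPG keys could be used within TUF.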
Thanks @endophage and @justincormack. Good summaries. GPG signatures are simple and familiar, but TUF and Notary address more attack vectors, particularly in the case of continuously updated packages/images. As per OCI Scope table:
We've confirmed with OCI that it does not plan to dictate the form of the signature(s) attached. As discussed above, CNCF does not plan to dictate that TUF is the only valid signing method. In particular, I expect GPG to continue to be supported within the cloud-native ecosystem. I'm satisfied that additional important common usage scenarios will be addressable with Notary in the not-too-distant future. We're done with technical diligence at this point. |
The Notary/TUF official @cncf/toc vote is out now: https://lists.cncf.io/pipermail/cncf-toc/2017-October/001251.html Thanks everyone. |
Notary and TUF were presented to the TOC on 2017-06-20.
@chanezon do you have the links to the presentation recording?