Audit log MVP #7339
Conversation
```rust
let project =
    nexus.project_create(&opctx, &new_project.into_inner()).await?;

let _ = nexus.audit_log_entry_complete(&opctx).await?;
```
I started with project create because it's easy to work with in tests, but I know it's not in the short list of things we want to start with. We might end up simply logging every endpoint.
Yeah, we'll want (at least eventually) to include all (at least all authenticated) API methods. If we want to start with just a subset of the methods, we should prioritize those that make changes (vs. GET operations), but with the intention of eventually covering the whole API.
Related note: while not a requirement for this initial version, I spoke to @sunshowers about strategies for enforcing that new methods implement the audit log. It's a place I think we'd like to get to.
It's related to Dropshot lacking middleware — notice we manually call this `instrument_dropshot_handler` thing in every endpoint. I wonder if we could build that in elsewhere, make it automatic, and add the audit log call to it.
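A wrapper along those lines could at least enforce the init/complete pairing in one place. Here's a dependency-free sketch of the idea; all names and types are simplified stand-ins for illustration, not the real Nexus or Dropshot API:

```rust
// Sketch: enforce the init/complete pairing in one wrapper so individual
// endpoints can't forget the completion call. All types are stand-ins.
#[derive(Debug)]
struct AuditEntry {
    id: u64,
    // None until the operation completes
    result: Option<Result<u16, String>>,
}

#[derive(Default)]
struct AuditLog {
    entries: Vec<AuditEntry>,
    next_id: u64,
}

impl AuditLog {
    // Called before the operation; if this failed, we'd bail before running it.
    fn entry_init(&mut self) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.entries.push(AuditEntry { id, result: None });
        id
    }

    // Called after the operation, recording the HTTP status or error message.
    fn entry_complete(&mut self, id: u64, result: Result<u16, String>) {
        if let Some(e) = self.entries.iter_mut().find(|e| e.id == id) {
            e.result = Some(result);
        }
    }
}

// The wrapper: every audited endpoint body runs inside this, so the audit
// calls live in one place instead of being repeated in each handler.
fn with_audit<T>(
    log: &mut AuditLog,
    op: impl FnOnce() -> Result<(T, u16), String>,
) -> Result<T, String> {
    let id = log.entry_init();
    match op() {
        Ok((value, status)) => {
            log.entry_complete(id, Ok(status));
            Ok(value)
        }
        Err(msg) => {
            log.entry_complete(id, Err(msg.clone()));
            Err(msg)
        }
    }
}

fn main() {
    let mut log = AuditLog::default();
    let created = with_audit(&mut log, || Ok(("project-id", 201)));
    let failed: Result<(), String> = with_audit(&mut log, || Err("forbidden".into()));
    assert_eq!(created.unwrap(), "project-id");
    assert!(failed.is_err());
    // Both operations were logged, whether they succeeded or not.
    assert_eq!(log.entries.len(), 2);
}
```

In the real codebase the wrapper would need to be async and thread through `opctx`, but the shape of the guarantee is the same.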
Some initial thoughts on the fields in `AuditLogEntry`.
nexus/db-model/src/audit_log.rs
```rust
// TODO: this isn't in the RFD but it seems nice to have
pub request_uri: String,
```
Yeah, this looks like it might be the closest thing we'd have to a rack and/or fleet ID, which I think we'd want: something customers can use to filter which audit logs came from which rack or fleet. This may suffice for now, but maybe just until we get multi-rack implemented?
nexus/db-model/src/audit_log.rs
```rust
// Fields that are optional because they get filled in after the action completes
/// Time in milliseconds between receiving request and responding
pub duration: Option<TimeDelta>,
```
While fine to include, I don't think this is required, in case that makes it easier. I'm not following the earlier note about how this relates to including the response in the audit log entry.
I just meant the response and the duration are both things we only know at the end of the operation.
nexus/db-model/src/audit_log.rs
```rust
// TODO: including a real response complicates things
// Response data on success (if applicable)
// pub success_response: Option<Value>,
```
While this indeed complicates things, it is critical IMO. For example, if someone were to create a new instance, this audit log entry should say what the new instance ID is.
nexus/db-model/src/audit_log.rs
```rust
#[derive(Queryable, Insertable, Selectable, Clone, Debug)]
#[diesel(table_name = audit_log)]
pub struct AuditLogEntry {
```
I'm thinking it might make more sense to put operation-specific things like `resource_type`, `resource_id`, and maybe `action` into something like a `request_elements: Value`, where the operation can decide what makes sense to include.
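As a sketch of that shape — using a plain map in place of `serde_json::Value` to keep the example dependency-free, with hypothetical field names:

```rust
use std::collections::BTreeMap;

// Sketch of the request_elements idea: fixed columns stay in the struct,
// while operation-specific details go into one free-form field whose shape
// each endpoint chooses. BTreeMap stands in for serde_json::Value here.
struct AuditLogEntrySketch {
    actor_id: String,
    request_elements: BTreeMap<String, String>,
}

fn main() {
    let mut elements = BTreeMap::new();
    // A project-create operation might choose to record these:
    elements.insert("resource_type".to_string(), "project".to_string());
    elements.insert("action".to_string(), "create".to_string());
    let entry = AuditLogEntrySketch {
        actor_id: "user-123".to_string(),
        request_elements: elements,
    };
    assert_eq!(entry.request_elements["action"], "create");
}
```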
nexus/db-model/src/audit_log.rs
```rust
#[derive(Queryable, Insertable, Selectable, Clone, Debug)]
#[diesel(table_name = audit_log)]
pub struct AuditLogEntry {
```
I'd like for us to include a versioned format, where we stick to major/minor semver and include an `event_version` in this struct. I'm not sure how we'd want to manage that, and for all I know it might be a little more difficult for fields with `Value` type (the request and response bits), but I think it's important for us not to silently break user parsers.
I was thinking we could use the release version, but I see you mean the abstract shape of the log entry, and we'd want the version to stay the same across releases when applicable to indicate that log parsing logic does not have to change. So we should probably include both a log format version and the release version. Semver might be overkill — maybe we can get away with integers and not worry about distinguishing between breaking, semi-breaking, and non-breaking changes.
> I was thinking we could use the release version, but I see you mean the abstract shape of the log entry, and we'd want the version to stay the same across releases when applicable to indicate that log parsing logic does not have to change. So we should probably include both a log format version and the release version. Semver might be overkill — maybe we can get away with integers and not worry about distinguishing between breaking, semi-breaking, and non-breaking changes.
The patch number of SemVer might be overkill, but following similar rules for major and minor versions, to differentiate changes that'd break parsers from those that shouldn't (e.g., new fields added), could still fit into SemVer rules and be a natural means of indicating when parser logic has to change.
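For illustration, here's a sketch of how major/minor `event_version` rules could gate a parser, assuming minor bumps only ever add fields (all names hypothetical):

```rust
// Sketch: a parser built against major version N accepts any N.x entry
// (minor bumps only add fields) but rejects entries with a different major.
#[derive(Debug, Clone, Copy, PartialEq)]
struct EventVersion {
    major: u32,
    minor: u32,
}

fn parser_accepts(parser_major: u32, entry: EventVersion) -> bool {
    // Only the major version gates parsing; minors are backward compatible.
    entry.major == parser_major
}

fn main() {
    // New optional fields (minor bump) don't break an existing parser...
    assert!(parser_accepts(1, EventVersion { major: 1, minor: 4 }));
    // ...but a reshaped entry (major bump) does.
    assert!(!parser_accepts(1, EventVersion { major: 2, minor: 0 }));
}
```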
Pulling these refactors out of #7339 because they're mechanical and just add noise. The point is to make it a cleaner diff when we add the function calls or wrapper code that creates audit log entries, as well as to clean up the `device_auth` (eliminated) and `console_api` (shrunken substantially) files, which have always been a little out of place.

### Refactors

With the change to a trait-based Dropshot API, the already weird `console_api` and `device_auth` modules became even weirder, because the actual endpoint definitions were moved out of those files and into `http_entrypoints.rs`, but they still called functions that lived in the other files. These functions were redundant and had signatures more or less identical to the endpoint handlers. That's the main reason we lose 90 lines here.

Before we had

```
http_entrypoints.rs -> console_api/device_auth -> nexus/src/app functions
```

Now we (mostly) cut out the middleman:

```
http_entrypoints.rs -> nexus/src/app functions
```

Some of what was in the middle moved up into the endpoint handlers, some moved "down" into the nexus "service layer" functions.

### One (1) functional change

The one functional change is that the console endpoints are all instrumented now.
Initial implementation of RFD 523.

### High-level design

- `audit_log_entry_init`: called before anything else, and if it fails, we bail — this guarantees nothing can happen without getting logged
- `audit_log_entry_complete`: called after the operation succeeds or fails, filling in the row with the success or failure result. Currently we only log the HTTP status code and possibly error message, but we will fill this in further with, e.g., the ID of the created resource (if applicable), and maybe the entire success response.
- List endpoint: `/v1/system/audit-log`
- Entries are sorted by `time_completed`, not `time_started`. This turns out to be very important — see the doc comment on `audit_log_list` in `nexus/db-queries/src/db/datastore/audit_log.rs`.
- We page on `time_completed`, but not all entries in the audit log table have non-null `time_completed`
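To illustrate why the nullable sort key matters, here's a dependency-free sketch of paging on `time_completed`; the real logic lives in the Diesel query mentioned above, and these names and types are stand-ins:

```rust
// Sketch: entries still in progress have no time_completed, so a cursor
// over that column has to exclude them — otherwise they'd fall between
// pages unpredictably once they complete.
fn page_after(entries: &[(u64, Option<u64>)], cursor: u64, limit: usize) -> Vec<u64> {
    let mut page: Vec<(u64, u64)> = entries
        .iter()
        // Only completed entries have a usable sort key.
        .filter_map(|&(id, time_completed)| time_completed.map(|t| (t, id)))
        .filter(|&(t, _)| t > cursor)
        .collect();
    page.sort();
    page.into_iter().take(limit).map(|(_, id)| id).collect()
}

fn main() {
    // (entry id, optional time_completed)
    let entries = vec![(1, Some(10)), (2, None), (3, Some(5)), (4, Some(20))];
    // Entry 2 is uncompleted and never appears in a page of results.
    assert_eq!(page_after(&entries, 0, 10), vec![3, 1, 4]);
    // Paging resumes from the last time_completed seen.
    assert_eq!(page_after(&entries, 10, 10), vec![4]);
}
```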
### Operations logged

See `nexus/src/external_api/http_entrypoints.rs`. My goal was to start by logging the operations that create sessions and tokens. Eventually I think we want to log pretty much everything that's not a GET.

- `login_saml`: last step of SAML login, creates web session
- `login_local`: username/password login, creates web session
- `device_auth_confirm`: last step of token create
- `project_create` and `project_delete`
- `instance_create` and `instance_delete`
- `disk_create` and `disk_delete`
### Next steps

Things that are not in this PR, but which we will want to do soon, possibly as soon as this release. I put the highest-priority items first.

#### Log ID of created resource

For actions that create a resource, like disk or instance create, we need to at least log the ID of the resource created. Even for token and session creation, we can probably log the ID of the created token or session. We may also want to log names if we have them.

#### Log display name of user and silo

We only have UUIDs for user and silo, and they are not very pleasant to work with. It's a lot easier to see what's going on at a glance if we have display names. On top of that, after a user or silo is deleted, there isn't a way to look them up in the API by ID and get that info.
#### Auto-complete uncompleted entries

Unlike initialization (where we bail on failure), we have no guarantee that audit log completion runs, because we don't want to turn every loggable operation into a saga to enable rollbacks. To deal with this, we will likely need a background job to complete any rows hanging around uncompleted for longer than N minutes or hours. Because these will not have success or error info about the logged operation, we will probably need an explicit third kind of completed entry, like `success`/`error`/`timeout`.

#### Log ID of token or session used to authenticate operation

We have these IDs as of #8137, might as well use them.
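The `success`/`error`/`timeout` idea above could be modeled as a three-way completion state. A sketch, with hypothetical names:

```rust
// Sketch: an explicit completion state so reaped entries are distinguishable
// from real successes and failures. Names are illustrative.
#[derive(Debug, PartialEq)]
enum AuditResult {
    Success { http_status: u16 },
    Error { http_status: u16, message: String },
    // Set by the background task for entries left uncompleted too long.
    Timeout,
}

fn describe(result: &AuditResult) -> String {
    match result {
        AuditResult::Success { http_status } => format!("ok ({http_status})"),
        AuditResult::Error { http_status, message } => {
            format!("error ({http_status}): {message}")
        }
        AuditResult::Timeout => "never completed; reaped by background task".to_string(),
    }
}

fn main() {
    assert_eq!(describe(&AuditResult::Success { http_status: 201 }), "ok (201)");
    assert_eq!(
        describe(&AuditResult::Timeout),
        "never completed; reaped by background task"
    );
}
```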
#### Versioned log format

We may want to indicate breaking changes to the log format so that customers can update whatever system is consuming and storing the log.
#### Silo-level audit log endpoint

In this PR, the audit log can only be retrieved by fleet viewers at a system-level endpoint. We will probably want to allow silo admins to retrieve an audit log scoped to their silo. That will require:

- a `/v1/audit-log` endpoint accessible only to silo admins that does more or less what the system-level one does, plus `where silo_id = <silo_id>`
- a `SiloAuditLog` authz resource alongside `AuditLog` that is tied to a specific silo

#### Log putative user for login operations

For failed login attempts, we want to know who they were trying to log in as. For SAML login this may not be meaningful, as we only get the request from the IdP after login was successful over there, but for password login we could log the username.
#### Log full JSON response

We may want to go as far as to log the entire JSON response. One minor difficulty I ran into is that Dropshot handles serializing the response struct to JSON, so we don't have access to the serialized thing in the request handlers. Feels like a shame to serialize it twice, but we might have to if we want to write down the response.
#### Clean up old entries

Background task to delete entries older than N days, as determined by our as-yet-undetermined retention policy. We need to keep an eye on how fast the table will grow, but it seems we already have some tables that are quite huge compared to this one and we don't clean them up yet, so I'm not too worried about it. We expect customers will want to frequently fetch the log and save it off-rack, so the retention period probably doesn't need to be very long.
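The retention pass could be as simple as a delete with a cutoff. A minimal sketch, with timestamps as plain integers and the window as a placeholder for the undetermined policy value (the real task would be a DELETE against the `audit_log` table):

```rust
// Sketch: drop entries whose completion time falls outside the retention
// window. Entries are (id, time_completed) pairs for illustration.
fn prune(entries: &mut Vec<(u64, u64)>, now: u64, retention: u64) {
    // Keep only entries completed within the last `retention` ticks.
    entries.retain(|&(_, time_completed)| now.saturating_sub(time_completed) <= retention);
}

fn main() {
    // (entry id, time_completed)
    let mut entries = vec![(1, 10), (2, 90), (3, 50)];
    prune(&mut entries, 100, 30);
    // Only entries completed at time >= 70 survive a 30-tick window at t=100.
    assert_eq!(entries, vec![(2, 90)]);
}
```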
#### Log a bunch more events

Right now the audit log calls are a bit verbose. Dropshot deliberately does not support middleware, which would let us do this kind of thing automatically outside of the handlers. Finding a more ergonomic and less noisy way of doing the audit logging and latency logging might require a declarative macro.