
build: specify a step option to avoid being run in parallel #14934


Open
Tracked by #14647
davidgmbb opened this issue Mar 16, 2023 · 6 comments
Labels
  • enhancement: Solving this issue will likely involve adding new logic or components to the codebase.
  • proposal: This issue suggests modifications. If it also has the "accepted" label then it is planned.
  • zig build system: std.Build, the build runner, `zig build` subcommand, package management
Milestone

Comments

@davidgmbb
Contributor

Zig Version

0.11.0-dev.2146+9964f1c16

Steps to Reproduce and Observed Behavior

Currently, when running custom steps, all of them are run at the same time. If you have a child process executing QEMU and two instances of QEMU are run at the same time, the second instance fails with something along the lines of:

qemu-system-x86_64: Failed to get "write" lock
Is another process using the image [zig-cache/rise_ReleaseFast_x86_64_rise_bios_test.hdd]?

I suppose the Zig user (the programmer of such a project) could handle this themselves, but I think it would be very tedious to do.

Expected Behavior

One should be able to specify that a particular step must not be run in parallel with others, but rather one at a time. Another reason handling this manually would be annoying, even if it were possible, is that the logs of the parallel instances get mixed together.
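
As a rough illustration, such an option might look something like the hypothetical sketch below. The serialized field is invented purely for this example and does not exist in std.Build; b is the usual *std.Build and runQemu a custom make function:

// Hypothetical sketch: a per-step flag that the build runner would check
// before scheduling the step concurrently with others. Note that
// `serialized` is NOT a real std.Build.Step field.
var run_step = std.Build.Step.init(.{
    .id = .custom,
    .name = "run qemu",
    .owner = b,
    .makeFn = runQemu,
});
run_step.serialized = true; // never run this step in parallel with others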

@davidgmbb davidgmbb added the bug Observed behavior contradicts documented or intended behavior label Mar 16, 2023
@InKryption
Contributor

A possible workaround comes to mind: would making one of the QEMU steps depend on the other work?
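
For instance, a minimal sketch of that workaround, assuming two already-constructed run steps (qemu_step_a, qemu_step_b) and an umbrella test_all step:

// Force an ordering between the two QEMU run steps so they can never
// execute concurrently; the umbrella step then runs them one after the other.
qemu_step_b.dependOn(qemu_step_a);
test_all.dependOn(qemu_step_b);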

@davidgmbb
Contributor Author

Currently my test_all step is a dummy step that depends on all the others (which are custom too). I could chain them as you describe, which is probably the way to go. Just to give a quick feel for how I have structured my run steps after the build-parallelism fixes: there is probably a better way, and I would also like the core members' opinion on how best to structure this and what direction they are moving in, so I can write the code that best fits my needs. The code is not published yet, but if needed I can push it.

const RunSteps = struct {
    configuration: Configuration,
    run: Step,
    debug: Step,
    gdb_script: Step,
    disk_image_builder: *RunStep,

    fn getGDBScriptPath(configuration: Configuration) ![]const u8 {
        return try std.mem.concat(b.allocator, u8, &.{ "zig-cache/gdb_script_", @tagName(configuration.bootloader), "_", @tagName(configuration.architecture), "_", @tagName(configuration.boot_protocol) });
    }

    const RunStepsSetup = struct {
        cpu_driver: *CompileStep,
        bootloader_step: ?*CompileStep,
        disk_image_builder: *CompileStep,
        user_modules: []const *CompileStep,
        disk_image_builder_run: *RunStep,
    };

    fn createStep(run_steps: *RunSteps, comptime step: []const u8, setup: RunStepsSetup, suffix: []const u8) !void {
        if (setup.bootloader_step) |bs| {
            @field(run_steps, step).dependOn(&bs.step);
            setup.disk_image_builder_run.step.dependOn(&bs.step);
        }

        for (setup.user_modules) |user_module| {
            @field(run_steps, step).dependOn(&user_module.step);
            setup.disk_image_builder_run.step.dependOn(&user_module.step);
        }

        @field(run_steps, step).dependOn(&setup.cpu_driver.step);
        setup.disk_image_builder_run.step.dependOn(&setup.cpu_driver.step);

        @field(run_steps, step).dependOn(&setup.disk_image_builder.step);
        @field(run_steps, step).dependOn(&setup.disk_image_builder_run.step);

        const final_step = b.step(try std.mem.concat(b.allocator, u8, &.{ @tagName(setup.cpu_driver.kind), "_", suffix }), "Run the operating system through an emulator");
        final_step.dependOn(&@field(run_steps, step));
    }

    const RunError = error{
        failure,
    };

    fn run(step: *Step, progress_node: *std.Progress.Node) !void {
        _ = progress_node;
        const run_steps = @fieldParentPtr(RunSteps, "run", step);

        const is_debug = false;
        const is_test = run_steps.configuration.executable_kind == .@"test";
        const arguments = try qemuCommon(run_steps, .{ .is_debug = is_debug, .is_test = is_test });

        var process = std.ChildProcess.init(arguments.list.items, b.allocator);

        switch (try process.spawnAndWait()) {
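            // QEMU's isa-debug-exit device makes QEMU exit with status
            // (value << 1) | 1, so a clear bit 0 means QEMU exited on its
            // own; masking bit 0 off and shifting recovers the written value.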
            .Exited => |exit_code| {
                if (exit_code & 1 == 0) {
                    return RunError.failure;
                }

                const mask = common.maxInt(@TypeOf(exit_code)) - 1;
                const masked_exit_code = exit_code & mask;

                if (masked_exit_code == 0) {
                    return RunError.failure;
                }

                const qemu_exit_code = @intToEnum(common.QEMU.ExitCode, masked_exit_code >> 1);

                if (qemu_exit_code != .success) {
                    return RunError.failure;
                }
            },
            else => return RunError.failure,
        }
    }

    fn debug(step: *Step, progress_node: *std.Progress.Node) !void {
        _ = progress_node;
        const run_steps = @fieldParentPtr(RunSteps, "debug", step);
        const is_debug = true;
        const is_test = run_steps.configuration.executable_kind == .@"test";
        var arguments = try qemuCommon(run_steps, .{ .is_debug = is_debug, .is_test = is_test });

        if (!(run_steps.configuration.execution_type == .accelerated or (arguments.config.virtualize orelse false))) {
            try arguments.list.append("-S");
        }

        try arguments.list.append("-s");

        const debugger_process_arguments = switch (common.os) {
            .linux => .{ "kitty", "gdb", "-x", try getGDBScriptPath(run_steps.configuration) },
            else => return Error.not_implemented,
        };

        var debugger_process = std.ChildProcess.init(&debugger_process_arguments, b.allocator);
        _ = try debugger_process.spawn();

        var qemu_process = std.ChildProcess.init(arguments.list.items, b.allocator);
        _ = try qemu_process.spawnAndWait();
    }

    fn gdbScript(step: *Step, progress_node: *std.Progress.Node) !void {
        _ = progress_node;
        const run_steps = @fieldParentPtr(RunSteps, "gdb_script", step);

        var gdb_script_buffer = std.ArrayList(u8).init(b.allocator);
        const architecture = run_steps.configuration.architecture;
        common.log.debug("Architecture: {}", .{architecture});
        switch (architecture) {
            .x86_64 => try gdb_script_buffer.appendSlice("set disassembly-flavor intel\n"),
            else => return Error.architecture_not_supported,
        }

        try gdb_script_buffer.appendSlice(try std.mem.concat(b.allocator, u8, &.{ "symbol-file zig-cache/cpu_driver_", try Suffix.cpu_driver.fromConfiguration(b.allocator, run_steps.configuration, null), "\n" }));
        try gdb_script_buffer.appendSlice("target remote localhost:1234\n");

        const base_gdb_script = try std.fs.cwd().readFileAlloc(b.allocator, "config/gdb_script", common.maxInt(usize));
        try gdb_script_buffer.appendSlice(base_gdb_script);

        try std.fs.cwd().writeFile(try getGDBScriptPath(run_steps.configuration), gdb_script_buffer.items);
    }

    const QEMUOptions = packed struct {
        is_test: bool,
        is_debug: bool,
    };

    fn qemuCommon(run_steps: *RunSteps, qemu_options: QEMUOptions) !struct { config: Arguments, list: std.ArrayList([]const u8) } {
        const config_file = try std.fs.cwd().readFileAlloc(b.allocator, try std.mem.concat(b.allocator, u8, &.{"config/" ++ @tagName(run_steps.configuration.execution_environment) ++ ".json"}), common.maxInt(usize));
        var token_stream = std.json.TokenStream.init(config_file);
        const arguments = try std.json.parse(Arguments, &token_stream, .{ .allocator = b.allocator });

        var argument_list = std.ArrayList([]const u8).init(b.allocator);

        try argument_list.append(try std.mem.concat(b.allocator, u8, &.{ "qemu-system-", @tagName(run_steps.configuration.architecture) }));

        if (qemu_options.is_test and !qemu_options.is_debug) {
            try argument_list.appendSlice(&.{ "-device", b.fmt("isa-debug-exit,iobase=0x{x:0>2},iosize=0x{x:0>2}", .{ common.QEMU.isa_debug_exit.io_base, common.QEMU.isa_debug_exit.io_size }) });
        }

        switch (run_steps.configuration.boot_protocol) {
            .uefi => try argument_list.appendSlice(&.{ "-bios", "tools/OVMF_CODE-pure-efi.fd" }),
            else => {},
        }

        var test_configuration = run_steps.configuration;
        test_configuration.executable_kind = .@"test";

        const image_config = try common.ImageConfig.get(b.allocator, common.ImageConfig.default_path);
        const disk_image_path = try std.mem.concat(b.allocator, u8, &.{ "zig-cache/", image_config.image_name, try Suffix.image.fromConfiguration(b.allocator, if (qemu_options.is_test) test_configuration else run_steps.configuration, "_"), ".hdd" });
        try argument_list.appendSlice(&.{ "-drive", b.fmt("file={s},index=0,media=disk,format=raw", .{disk_image_path}) });

        try argument_list.append("-no-reboot");

        if (!qemu_options.is_test) {
            try argument_list.append("-no-shutdown");
        }

        if (ci) {
            try argument_list.appendSlice(&.{ "-display", "none" });
        }

        //if (arguments.vga) |vga| {
        //try argument_list.append("-vga");
        //try argument_list.append(@tagName(vga));
        //}

        if (arguments.smp) |smp| {
            try argument_list.append("-smp");
            const smp_string = b.fmt("{}", .{smp});
            try argument_list.append(smp_string);
        }

        if (arguments.debugcon) |debugcon| {
            try argument_list.append("-debugcon");
            try argument_list.append(@tagName(debugcon));
        }

        if (arguments.memory) |memory| {
            try argument_list.append("-m");
            const memory_argument = b.fmt("{}{c}", .{ memory.amount, @as(u8, switch (memory.unit) {
                .kilobyte => 'K',
                .megabyte => 'M',
                .gigabyte => 'G',
                else => @panic("Unit too big"),
            }) });
            try argument_list.append(memory_argument);
        }

        if (canVirtualizeWithQEMU(run_steps.configuration.architecture) and (run_steps.configuration.execution_type == .accelerated or (arguments.virtualize orelse false))) {
            try argument_list.appendSlice(&.{
                "-accel",
                switch (common.os) {
                    .windows => "whpx",
                    .linux => "kvm",
                    .macos => "hvf",
                    else => @compileError("OS not supported"),
                },
                "-cpu",
                "host",
            });
        } else {
            // switch (common.cpu.arch) {
            //     .x86_64 => try argument_list.appendSlice(&.{ "-cpu", "qemu64,level=11,+x2apic" }),
            //     else => return Error.architecture_not_supported,
            // }

            if (arguments.trace) |tracees| {
                for (tracees) |tracee| {
                    const tracee_slice = b.fmt("-{s}*", .{tracee});
                    try argument_list.append("-trace");
                    try argument_list.append(tracee_slice);
                }
            }

            if (arguments.log) |log_configuration| {
                var log_what = std.ArrayList(u8).init(b.allocator);

                if (log_configuration.guest_errors) try log_what.appendSlice("guest_errors,");
                if (log_configuration.interrupts) try log_what.appendSlice("int,");
                if (!ci and log_configuration.assembly) try log_what.appendSlice("in_asm,");

                if (log_what.items.len > 0) {
                    // Delete the last comma
                    _ = log_what.pop();

                    try argument_list.append("-d");
                    try argument_list.append(log_what.items);

                    if (log_configuration.interrupts) {
                        try argument_list.appendSlice(&.{ "-machine", "smm=off" });
                    }
                }

                if (log_configuration.file) |log_file| {
                    try argument_list.append("-D");
                    try argument_list.append(log_file);
                }
            }
        }

        return .{ .config = arguments, .list = argument_list };
    }

    const Arguments = struct {
        const VGA = enum {
            std,
            cirrus,
            vmware,
            qxl,
            xenfb,
            tcx,
            cg3,
            virtio,
            none,
        };
        memory: ?struct {
            amount: u64,
            unit: common.SizeUnit,
        },
        virtualize: ?bool,
        vga: ?VGA,
        smp: ?usize,
        debugcon: ?enum {
            stdio,
        },
        log: ?struct {
            file: ?[]const u8,
            guest_errors: bool,
            assembly: bool,
            interrupts: bool,
        },
        trace: ?[]const []const u8,
    };
};

@davidgmbb
Contributor Author

This is the build.zig before the build parallel PR was merged:
https://github.com/davidgm94/rise/blob/caf3b9d9d21d869faa75c10f5b5cc8efbe4bc7f5/build.zig

@mitchellh
Contributor

I support this request. You likely already thought of this, but maybe introduce a [hopefully temporary] global mutex on this step for now.
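
A rough sketch of that idea, reusing the names from the snippet above (a process-global mutex suffices because the build runner executes steps on threads within a single process):

// Temporary workaround sketch: a module-level mutex serializing all QEMU
// invocations so only one instance touches the disk image at a time.
var qemu_mutex: std.Thread.Mutex = .{};

fn run(step: *Step, progress_node: *std.Progress.Node) !void {
    _ = progress_node;
    qemu_mutex.lock();
    defer qemu_mutex.unlock();
    // ... build the argument list and spawnAndWait as before ...
}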

@davidgmbb
Contributor Author

Thanks, yes, probably that's the best temporary solution :)

@andrewrk andrewrk added zig build system std.Build, the build runner, `zig build` subcommand, package management enhancement Solving this issue will likely involve adding new logic or components to the codebase. labels Mar 17, 2023
@andrewrk andrewrk added this to the 0.11.0 milestone Mar 17, 2023
@andrewrk andrewrk added proposal This issue suggests modifications. If it also has the "accepted" label then it is planned. zig build system std.Build, the build runner, `zig build` subcommand, package management and removed bug Observed behavior contradicts documented or intended behavior zig build system std.Build, the build runner, `zig build` subcommand, package management labels Jul 20, 2023
@andrewrk andrewrk modified the milestones: 0.11.0, 0.13.0 Jul 20, 2023
@Sahnvour
Contributor

I think that instead of forbidding the step from running in parallel with any other steps, we should model exclusive access to resources:

  • known to the build system, for example a LazyPath to your qemu disk file
  • external to the build system, for example "calling this program that can have only one instance running at any time"

That way you could still have other steps, ones that don't need the disk file, running at the same time.
The scheduler already has a similar concept for memory use: steps specify the amount of memory they need, which can prevent others from starting while there isn't enough for everybody.
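
Hypothetically, declaring such resources could look like this (none of these methods exist in std.Build; they are invented purely to illustrate the shape of the proposal):

// Hypothetical sketch only; no such API exists today.
// A resource known to the build system: the disk image's path.
// The scheduler would refuse to run two holders of the same resource at once.
run_step.exclusiveAccess(disk_image_path);
debug_step.exclusiveAccess(disk_image_path);
// A resource external to the build system, named by an opaque token:
run_step.exclusiveToken("qemu-single-instance");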
