Establish a clear vision for Rust 2018 #24
If I'm correct that's 12 weeks or so from now, right? That's probably not a lot of time to tackle large things, but perhaps there are some smaller things we could do? If we draw some parallels with the CLI WG: from my experience, what ended up working well for us was to get authors of widely-used libraries involved in the communication channels, but also to figure out which features might be important to have for a "perfect" CLI experience, and to build out a bunch of libraries to move towards that. The leads of the larger CLI frameworks have kept on pushing their work (yay!).

I would see a similar approach perhaps working well for the Net WG! It seems there's a handful of people leading some important, large efforts that will require some time to complete. But I reckon we could probably get a smaller group of people involved to identify speed bumps and write small, focused libraries to help solve those.

This is probably a rather modest approach to Rust 2018, but I feel that with the time available the biggest win we can get is probably by focusing on small projects we can see through from start to finish! Hope this is somewhat useful! Thanks!
Thanks @yoshuawuyts, that seems like a useful framing for sure! We'll cover this in detail in the kickoff meeting, but one thing I wanted to mention is that the Core Team is now planning to do an extended beta of the Edition (to be announced tomorrow), which means that we have about 19 weeks -- definitely enough time to make some significant progress, especially given that we have some full-timers we can allocate here. Anyway, the kickoff meeting is where we'll really start to hash this out, and I'll post notes here afterwards!
In the kickoff meeting, I proposed the following as a minimum bar for Rust 2018:
The above is fairly generic. I want to hear from you what seems important and feasible to tackle in the Rust 2018 timeframe!
@aturon That all sounds great. Here is where my disconnect is, though. I feel like we are missing a recap of where we are at vs. where we would like to be. Futures vs. futures-preview, the entire tokio stack, tower, futures-await, etc. are all part of this story, but it would be great to hear what the overall plan is for all of these pieces and where the team feels they will fit in (or whether they are to be deprecated). We could certainly go write lots of documentation and examples from the current state of things, but my understanding is that we are not close to done with the async story; rather, we are just getting started as async/await becomes a core concept in the language. With that being said, some thoughts, concerns, and questions in no particular order:
In my mind we need a lot more documentation and examples around core futures/async/await before we can provide guidance on higher layers. All that being said, I generally agree with your assessment; I would like to see the async book/guide you started completed. It looked like it would be extremely valuable to the community.
I agree with @aajtodd and share some of the same trouble. I love the goal of setting up guidelines for how to do it "right" when it comes to implementing network services, in terms of standard Rust systems to use, as well as in terms of libraries and middleware. Providing good documentation there, some solid examples, as well as, ideally, a flexible ecosystem for the common use-cases, all sound great.

But the state in which Rust will be when the 2018 Edition ships will still be very much in flux when it comes to core components of this vision (specifically around async/await). Because of this, it seems challenging to provide all of this in a way that is usable with stable Rust. I also can't advocate for us setting up all of these guidelines and recommendations against nightly, both because then they'd be subject to change (or worse, would make it harder to change things that need changing, like what happened with tokio to some extent), and because I think it sends the wrong message (being forced to use nightly to do a lot of things makes it harder for me to propose Rust as a solution).

While I understand and agree with the value of pushing as many things as possible out of the door with the 2018 Edition marketing operation, either of these options (nightly-only guidelines, or soon-to-be-deprecated ones) is, in my opinion, more trouble than value. Because Rust is here to stay, and because its value proposition around network services especially is so great, I think that we could, and should, let that deadline slip. If we're able to articulate a great set of tools, guidelines, libraries, middleware, and books to document it all, all of it on stable, production-ready Rust, I think the value will speak for itself. It might also be enough content to do a separate, more focused marketing push at that point (although I will admit that I don't know how much effort and budget are involved there).

Mayhap, in the meantime and to provide something for the 2018 Edition push, we could deliver some kind of preview of this work, mainly aimed at acquiring early adopters and contributors, who could help us achieve the loftier, longer-term goal. Thoughts?
@tynril I'm actually ok with working on and pushing nightly if the expectation is that it will be stabilized sometime in the 2018 Edition. One could argue we already have solutions that work on stable (e.g. existing tokio, futures, etc.). I would much rather be working towards a vision of what we want it to look like than "wasting" time on something that will be deprecated or OBE. What I do want is clarification on which to focus on as immediate goals. Just my 2 cents.
A small, but valuable, point to cover would be deployments, especially deployment using containers/Docker. This documentation could cover:
I've been referring to Tõnis Tiigi's Advanced multi-stage build patterns recently. It has a lot of valuable points.
@aajtodd Agreed, I believe that a longer term solution, targeting nightly at the time the 2018 Edition ships (in December), is probably best. There are downsides to this, though:
I think that this is worth it anyway, as it allows us to think slightly longer term (at a time when async/await is stable). I'm interested in the core team's perspective on this. @aturon?
I personally think that we should focus on solutions that work on stable. Here's why: async/await is unstable for a reason: it is not done yet. The syntax and the APIs are subject to change. Documenting it for everyone and presenting it as a "preview" is the same as stabilizing it prematurely. We don't want that. We need it to be unstable for a few more months because there are many areas that we haven't explored yet. There is, for example, a disagreement about the async function syntax itself. The only way to settle it is to implement the not-yet-implemented parts and to actually try out which way works better.

Also, I believe that when we stabilize, the essential parts of the ecosystem, in particular Tokio, should run on 0.3 without a compatibility layer. We need the experience from migrating these libraries to check that the futures 0.3 API isn't lacking in certain areas. Fully implementing async/await in the compiler and fully migrating some essential parts of the ecosystem over to 0.3 to make sure our APIs are good will take months, but I think it's the only way to ship async/await with confidence.

That said, I think we should devote a section of the website to explaining which areas are being worked on, how to get involved, and where to find updates on these efforts.
Ah, I should add: it's a long time until December, and it might very well be that we've finalized the syntax and API by then and it just hasn't made it into the stable build yet. That could be. We'll know whether it's realistic once we're closer to stabilization.
i feel that we have to work on nightly for the foreseeable future (or at least, i want to work on stabilizing nightly features and i want more people to use nightly to test things out) - while stabilization is important, i think having lots of eyes on the ecosystem before stabilization is necessary - writing docs and trying to integrate the unstable features into existing libraries is the best form of testing we have. we definitely should take our time, but this is the most crucial time for getting this stuff right.

i would love a relatively firm deadline to get "final decisions" on this stuff started, and to assert that we have a plan and this stuff will not be unstable forever. being able to say, at rust 2018's release, that we have a final plan on stabilizing the specific parts of the async ecosystem that must be implemented in the compiler / std would be a great goal to aim for.

of course, this mostly applies to the async ecosystem. for sync networking libraries, or documentation on guidance and best practices, there's a lot of stuff that doesn't require nightly and can be worked on. "network services" is a very broad topic, and there's a lot of room for many different approaches within this group to improve working with network services in rust.
Great questions/comments! In this reply, I'll focus on the specific question of expectations around the fundamentals of async for 2018. I'm not trying to draw out any particular conclusions, but just to lay out the "ground facts" as I see them:
Riffing on the ideas in this thread, plus those from the companion thread on WG structure, I want to make a somewhat more concrete strawman proposal for a Rust 2018 vision.
Of course, there's plenty that's still vague about the above, but note that each of the three goals includes one or more tangible "end artifacts": either a book, a specific ecosystem state, or both. What's left open for debate is mostly the scope of bullets 2-3. If we go in a direction like this, I imagine forming more focused subgroups for each of the three overall goals, with each subgroup working to set out scope/milestones toward the Rust 2018 release. All told, I feel like the above 3 goals would put us in a very strong position come Rust 2018, even if some portions of the story were not yet fully stable. (See my earlier post for more on that.) And I think that, with the roughly 40 people who are part of this WG -- several of them working part- or full-time -- we can probably pull something like this off. What do you think?
i think these are solid goals. for number 3 specifically, we should work with the existing ecosystem as much as possible, and see what existing, community-managed frameworks are out there that we could build upon. i don't want to start yet another web framework without checking out what's available in the broader rust ecosystem. i feel like pushing ahead on a new one without talking to the maintainers of existing libs would imply that libraries that didn't originate close to the "official rust project" will never be accepted as useful or important. (for the record, i don't think this is true, but i can definitely see how people could feel slighted when things they worked on get pushed aside simply b/c they never got looked at)
I heard on twitter that @seanmonstar is working on a web framework (https://twitter.com/seanmonstar/status/1019633735925297152?s=21).
Building on what @tinaun said about looking at existing libraries/frameworks, we could potentially look into helping update something like gotham.rs. In fact, gotham recently posted about looking for new maintainers, due to the original maintainers needing time away from coding. They also mentioned the amount of flux in the futures ecosystem as part of the reason for pausing development on the framework; see this post. Perhaps we could talk to maintainers of the various frameworks like gotham, see about migrating code to the new futures experience, and see what shakes out from that? This could also lead to posts about migrating code that uses old versions of the futures libraries, etc.
@tinaun Thanks for raising these issues! Let me clarify what I had in mind. First, I think there's some context I failed to make clear. Projects like QuiCLI and Flask at least originated purely as an effort to put together existing ecosystem components into a simple package. Even though Flask has grown a lot by now, it still very much takes an attitude of "use the ecosystem, don't build it anew". So the focus I had in mind was almost entirely on exploring, documenting, improving and/or creating reusable components in the web space -- things like the url and http crates, which provide a shared foundation for the web ecosystem. Targeting a Flask-like "micro-framework" was intended as a way to guide these efforts toward a concrete goal. This would be more like an example framework, heavily documented and easy to build on, much like how Flask started. The focus is on the components. More broadly, a couple of notes about web frameworks in general:
Does that all make sense?
Oh, one additional follow-up: for sure a big part of the work around "web foundations" should be building bridges with framework authors, both as a way to help find opportunities for factoring out common components and to help fully grasp the design space. And of course to find ways to contribute back, be it through docs, code or otherwise. To pick just one example: routing. That's an area with a lot of variance among web frameworks in general, and Rust is no exception. There are approaches based on sophisticated proc macros, simpler macros, HLists, untyped hashmaps, and Serde, to name a few. One thing that's missing in all this is a birds-eye view of the design space, and community exploration of the tradeoffs/alternative ideas. I think it'd be really helpful to do this kind of surveying in detail, and make sense of what we can collectively learn from the existing framework approaches.
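To make the routing example slightly more concrete, here is a minimal, hypothetical sketch of the "untyped hashmap" end of that design space; the `Router` and `Handler` types are invented for illustration and don't come from any existing crate:

```rust
use std::collections::HashMap;

// Handlers are boxed closures keyed by (method, path) -- the simplest,
// least-typed approach among those listed above.
type Handler = Box<dyn Fn(&str) -> String>;

struct Router {
    routes: HashMap<(String, String), Handler>,
}

impl Router {
    fn new() -> Router {
        Router { routes: HashMap::new() }
    }

    fn route(&mut self, method: &str, path: &str, handler: Handler) {
        self.routes.insert((method.to_string(), path.to_string()), handler);
    }

    fn dispatch(&self, method: &str, path: &str, body: &str) -> Option<String> {
        self.routes
            .get(&(method.to_string(), path.to_string()))
            .map(|handler| handler(body))
    }
}

fn main() {
    let mut router = Router::new();
    router.route("GET", "/health", Box::new(|_body: &str| "ok".to_string()));
    println!("{:?}", router.dispatch("GET", "/health", ""));
}
```

The proc-macro and HList approaches trade this simplicity for compile-time checking of route parameters, which is exactly the kind of tradeoff a survey could lay out.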
One thing I'd like to add is that the componentized framework being proposed should have one obvious way of doing things. I say this because it will make it obvious for newcomers to Rust/web development in general to just pick up what is recommended and go with it, and it will also make sure the whole situation doesn't devolve into analysis paralysis. For example, for handling forms the Flask community recommends WTForms, and for working with relational databases, SQLAlchemy. As a side note, I just came across Explore Flask and feel like this is exactly what the idea of a central repository of information on the Rust web ecosystem could look like. The book uses Flask as its base but highlights which packages work best with it and how to use them. This is what we've been discussing here and on this issue, right?
Thanks @bIgBV -- this is definitely an issue we'll have to sort through broadly, and it's one the Rust community has struggled with in general.
In the interest of generating more discussion ahead of this week's meeting: please pick apart my strawman and follow-up!
While high-level work and server frameworks are certainly interesting, I'd be a whole lot more excited about comparably simple tasks, like UDP and TCP servers and clients, both with high scalability but also for embedded systems. Both of these are needed for IoT and are kind of a weak spot in any other ecosystem right now, especially the latter, and hence a great chance for Rust to gain huge traction.
@therealprof Interesting! I wonder if you could make this a bit more concrete -- when you say "UDP and TCP servers and clients", what do you have in mind? Where is the ecosystem/documentation/... falling short today? And can you say more about the market opportunity you see?
Note: @seanmonstar announced his new framework, warp, today!
I think these are good goals, and I appreciate that we're targeting nightly (with the understanding that, hopefully, a lot of core things will be "de facto stable" by then, and that some other things might have to change in-flight). I'm curious whether there is a way for us to provide guidelines for the implementation of various "pieces" of networking software, such as (in no specific order):
While I think we should refrain from endorsing any specific solution in these spaces (or worse, making a new one), there might be a way to define some set of guidelines, patterns, or, ideally, traits, which could be implemented by the different providers in order to provide interoperability and composability. I like this idea because it can enable the creation of awesome crates within more specific domains that could then interact with others more or less seamlessly. I think that this is the value frameworks (as opposed to smaller, more specific libraries) provide, but I would like to believe that a monolithic approach isn't the only one. Edit: After catching up with my notifications, it seems like this could be something similar to what #34 is proposing.
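As a purely illustrative sketch of what such shared traits could look like -- all names and signatures here are hypothetical, not an existing or proposed API -- the point is that middleware written once against the trait composes over any provider underneath:

```rust
// Hypothetical interoperability trait that different providers could implement.
trait MessageTransport {
    type Error;

    fn send(&mut self, payload: &[u8]) -> Result<(), Self::Error>;
    fn recv(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error>;
}

// Composability: a logging layer that wraps any transport implementation.
struct LoggingTransport<T> {
    inner: T,
}

impl<T: MessageTransport> MessageTransport for LoggingTransport<T> {
    type Error = T::Error;

    fn send(&mut self, payload: &[u8]) -> Result<(), Self::Error> {
        println!("sending {} bytes", payload.len());
        self.inner.send(payload)
    }

    fn recv(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error> {
        let n = self.inner.recv(buf)?;
        println!("received {} bytes", n);
        Ok(n)
    }
}

// A trivial provider for demonstration: a loopback that echoes what was sent.
struct Loopback {
    pending: Vec<u8>,
}

impl MessageTransport for Loopback {
    type Error = ();

    fn send(&mut self, payload: &[u8]) -> Result<(), ()> {
        self.pending = payload.to_vec();
        Ok(())
    }

    fn recv(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
        let n = self.pending.len().min(buf.len());
        buf[..n].copy_from_slice(&self.pending[..n]);
        Ok(n)
    }
}

fn main() {
    let mut transport = LoggingTransport { inner: Loopback { pending: Vec::new() } };
    transport.send(b"hello").unwrap();
    let mut buf = [0u8; 16];
    let n = transport.recv(&mut buf).unwrap();
    assert_eq!(&buf[..n], b"hello");
}
```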
@aturon I'd be happy to elaborate in a few days when I have a real keyboard and non pigeon based internet available. :-)
@aturon I can talk a little bit about what @therealprof is probably referencing in terms of market opportunity. I've recently been investigating a lot of the current IoT-targeted OSes/frameworks, and honestly the entire landscape seems pretty dismal (as an embedded developer with significant experience in both the .NET and Rust ecosystems). There are a few things common to all the C-based frameworks:
And overall, just writing C for a lot of IoT use cases makes for some really annoyingly verbose code, e.g. doing parsing and serialisation for endpoints with |
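For contrast, a minimal sketch of what endpoint parsing and serialisation can look like in Rust, assuming the serde and serde_json crates; the payload shape and field names are invented for the example:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical sensor payload; the derive macros generate all the parsing
// and encoding code that would be hand-written in C.
#[derive(Serialize, Deserialize, Debug)]
struct Reading {
    sensor_id: u32,
    celsius: f32,
}

fn main() -> Result<(), serde_json::Error> {
    let reading: Reading = serde_json::from_str(r#"{"sensor_id": 7, "celsius": 21.5}"#)?;
    println!("parsed: {:?}", reading);
    println!("re-encoded: {}", serde_json::to_string(&reading)?);
    Ok(())
}
```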
@aturon @Nemo157 already provided some nice insight into the IoT landscape. Let me try to expand on the IoT stuff before getting back to the other points: in today's world, when you want to implement an IoT device, these are your options:
All of these solutions are far from ideal: the "easy" ways (1. and 2.) are always a massive waste of device resources, while the "hard" way (3.) is a waste of human resources. And that does not even consider code quality, security, safety and upgradeability, which are the reasons why IoT nowadays is more the "Internet of shitty Devices". This presents an opportunity for Rust, because not only do we have the safety features of the language, we also have the size advantage and the ability to use the big pool of functionality from

Back to simple UDP/TCP servers: I have yet to find simple examples of how to implement something trivial like a UDP echo server, a TCP traffic generator/sink, or a non-buffering TCP echo, and all of that without using synchronisation primitives.
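For reference, the blocking version of a UDP echo server is only a handful of lines using nothing but std::net (the port below is arbitrary); the missing piece described above is an equally simple, documented example built on the event-driven/async stack without threads or locks:

```rust
use std::io;
use std::net::UdpSocket;

fn main() -> io::Result<()> {
    // Bind once, then echo every datagram back to its sender.
    let socket = UdpSocket::bind("0.0.0.0:7878")?;
    let mut buf = [0u8; 1500];
    loop {
        let (len, peer) = socket.recv_from(&mut buf)?;
        socket.send_to(&buf[..len], peer)?;
    }
}
```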
Following today's meeting (which I was unable to attend) I'm somewhat concerned that the selected motivating cases of embedded and web almost entirely fail to encompass the cases that interest me:
Working in these spaces I've encountered some specific pain points that may be attributable to the web focus of the current stack:
It's not clear to me that the chosen focus will lead to the WG addressing these sorts of issues, or avoiding introducing similar new ones. While embedded is sufficiently far removed from web that it will undoubtedly help avoid over-specializing to some degree, it in turn still seems distant from the above cases. QUIC, at least, arguably has a place under the web umbrella, motivated as it is by the demand for a successor to HTTP-over-TCP. I suspect this is not the sort of use case people had in mind for that WG, though.
I'm also interested in non-web servers, specifically around games (that's my main Rust use case professionally). I do believe that for most "backend"-style game servers (i.e. servers that aren't doing real-time replication), a lot of ground can be covered under the web umbrella, even if the transport protocol doesn't end up being HTTP. For real-time replication, I'm unsure how much work could really be taken on by this WG, as it seems quite specific and narrow. There is definitely space for innovation there (RakNet isn't getting much activity even with Oculus' acquisition, ENet isn't active either, Ice is GPL or requires a commercial license, etc.). But is this something that a WG should take on? Is there much to standardize, above the deepest async layers that are core language features?
@Ralith It sounds like we share pretty much the same concerns. I'm not a big fan of the "solve a problem synchronously, then fire off enough threads to make it non-blocking" methodology; it doesn't work at all well on embedded and typically yields sub-par results on servers. A much better way to handle concurrency is event-based processing, which is what we're getting at in Rust, so I'm very stoked about that. For me, the focus on embedded means ensuring that the toolset works and scales from OS-less MCUs to high-end network serving, since scaling up is much easier (Java ME embedded, anyone?). I very much agree with your web concern: everyone is doing "web" now, so it's not only well covered from all angles but there's also lots of competition. However, the real grunt work that enables all the webby things happens behind the curtains, is much less competitive, and I don't see that we have a whole lot to offer there. I'd rather have the message say "Rust enables easy and safe networking, from embedded to server, and BTW: we do Web, too", than "Rust also does Web".
Echoing @therealprof's comments about the need for better support for event-driven / async embedded applications. My current area of focus is robotics, which covers everything from bare-metal devices to autonomous vehicles to factories with hundreds or thousands of networked devices. Rust is potentially a great fit for this entire class of applications because of the requirements for performance and safety, but it is still fairly clumsy for developing event-driven, single-threaded, message-passing real-time applications. This type of application architecture is important because it makes safety and worst-case performance analysis tractable, and testing and simulation much simpler. Adding threads explodes the number of combinations that you must analyze and test.

The problem isn't so much building individual event loops from scratch; Mio is a pretty good foundation to build on, and building state machines using pattern matching and/or session types is actually pretty fun. What's harder is getting multiple event loops to work together, and working with networking-related crates not designed with async in mind -- the vast majority. If someone has implemented a network protocol (or something using sockets internally, like libusb) in a blocking style, the alternatives are to run the protocol handler in a thread and add a messaging layer, or to reimplement the protocol. In the C world, the approach is to provide callback-based APIs for this purpose, which works but is hugely unsafe. In Rust it is much more natural to poll for events and then pattern match, which requires that the caller and callee have a way to share a higher-level scheduler, i.e.

The latest version of Futures seems to be getting closer to what I'd want, but still seems to be a bit high-level for my needs. Many of the concepts match up (Executors, Tasks and Futures) but it's not really clear how I would integrate a local thread pool with an event loop; documentation and examples might help, but I get the feeling that what I want is really buried inside the internal implementation of the local thread pool.

In an ideal world, the Rust async story is compelling enough that authors naturally build crates as async and then build a blocking API on top. It then becomes equally easy to build event-driven / state machine servers, async-await / futures servers, or thread-pool based servers on top, depending on application requirements.
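As a small illustration of the enum-plus-pattern-matching style of event-driven state machine mentioned above; the states and events are invented for the example and don't correspond to any real protocol:

```rust
// Toy connection state machine: each step consumes one event and returns the
// next state, so the whole thing is a pure, easily testable function.
enum Event {
    Connected,
    Data(Vec<u8>),
    Closed,
}

enum State {
    Idle,
    Active { received: usize },
    Done,
}

fn step(state: State, event: Event) -> State {
    match (state, event) {
        (State::Idle, Event::Connected) => State::Active { received: 0 },
        (State::Active { received }, Event::Data(bytes)) => {
            State::Active { received: received + bytes.len() }
        }
        (State::Active { received }, Event::Closed) => {
            println!("connection closed after {} bytes", received);
            State::Done
        }
        // Any other combination is unexpected; a real implementation would
        // report an error rather than silently terminating.
        (_, _) => State::Done,
    }
}

fn main() {
    let events = vec![Event::Connected, Event::Data(vec![1, 2, 3]), Event::Closed];
    let final_state = events.into_iter().fold(State::Idle, step);
    match final_state {
        State::Done => println!("machine ran to completion"),
        _ => println!("machine stopped early"),
    }
}
```

The hard part described above starts once several of these machines need to share a scheduler and interoperate with blocking crates; the sketch only shows why the single-machine case is pleasant.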
Thanks everybody for the discussion! I've now updated the README to reflect the structure and goals we settled on last week, together with elaborated summaries and vision statements from each subgroup. Closing this out as resolved!
This WG hasn't managed to take off, largely because the leads have been heads-down trying to get futures 0.3 and async/await working. With an alpha nearly out the door, we want to try to get this broader group going again, and see what we can accomplish by the Rust 2018 release date (December 6th).