If you visualize the application as a set of asynchronously running machines sending data streams to each other, it clearly becomes harder to determine the exact time at which things happen. The flip side of this is that you have to decide which time relationships actually matter, e.g. relating an event to a screen input, and let the scheduler worry about the rest... provided that certain constraints are met: one is that information packets must not overtake each other within a single connection; the other is that a downstream component only sees an information packet after it has left the upstream one.
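To make this concrete, here is a minimal sketch using Go goroutines and buffered channels as stand-ins for components and bounded-buffer connections. The component roles and packet contents are invented purely for illustration; this is not a real FBP runtime, just one way of modelling the constraints above:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// A "connection" is a bounded buffer: a buffered channel preserves the
// order of packets, so they cannot overtake each other, and a downstream
// component only sees a packet after the upstream one has sent it.
func main() {
	conn1 := make(chan string, 4) // reader -> upcaser
	conn2 := make(chan string, 4) // upcaser -> writer

	var wg sync.WaitGroup
	wg.Add(3)

	// Reader component: emits a stream of information packets.
	go func() {
		defer wg.Done()
		defer close(conn1)
		for _, p := range []string{"alpha", "beta", "gamma"} {
			conn1 <- p
		}
	}()

	// Transform component: consumes from conn1, produces to conn2.
	go func() {
		defer wg.Done()
		defer close(conn2)
		for p := range conn1 {
			conn2 <- strings.ToUpper(p)
		}
	}()

	// Writer component: consumes the final stream.
	go func() {
		defer wg.Done()
		for p := range conn2 {
			fmt.Println(p)
		}
	}()

	wg.Wait()
}
```

Each component runs at its own pace; the connections guarantee that packets arrive downstream in the order they were sent, which is exactly the first constraint.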
So suppose component A sends two packets, 'p' and 'q', on its connection to B: B sees 'p' after A sends it, and B sees 'q' after 'p', but the timing relationship between B seeing 'p' and A sending out 'q' is not defined, and is not important! Nevertheless, observation shows that all packets are processed by both components, and in the right order; and because the connections are bounded buffers, a fast sender can never run unboundedly ahead of a slow receiver, so the whole stream gets processed in finite storage. Applications built this way usually perform better in elapsed time than conventional one-step-at-a-time applications (at some cost in CPU time), and are much more maintainable. We have run a bank for 30 years on this architecture!
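Going back to the A-and-B scenario, here is a sketch of the same idea with a deliberately tiny (capacity-1) connection, so the bounded buffer is visible at work: A simply waits whenever B falls behind, and nothing is lost or reordered. Again, the names and timings are invented for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Capacity 1: A can run at most one packet ahead of B.
	conn := make(chan string, 1)

	// Component A sends 'p' then 'q'; when the buffer is full it simply
	// waits, so no packet is ever dropped or overtaken.
	go func() {
		for _, pkt := range []string{"p", "q"} {
			fmt.Println("A sends", pkt)
			conn <- pkt
		}
		close(conn)
	}()

	// Component B sees 'p' before 'q'; how its receipt of 'p' interleaves
	// with A's sending of 'q' is left entirely to the scheduler.
	for pkt := range conn {
		time.Sleep(10 * time.Millisecond) // B is slower than A
		fmt.Println("B sees ", pkt)
	}
}
```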