To Couple or Not to Couple; That is the Question

I’ve reached another point of contemplation in my efforts to build the greatest media player ever. This is an architectural matter about application development.

The puzzle in question is the use of loosely coupled application architecture versus event-driven application architecture.

Loosely Coupled Applications

In a loosely coupled application, interfaces define contracts for the various components of the application. Dependent code is then given (preferably injected with) a reference to an implementation of that interface. The implementation may or may not be a singleton, but that does not matter to the dependent code. It has a handle on an instance of the interface it requires to do its job and can execute accordingly.
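A minimal sketch of that shape, using hypothetical names (`AudioDecoder`, `Mp3Decoder`, `Player` are illustrative, not code from Project: Apollo):

```java
// The interface defines the contract.
interface AudioDecoder {
    String decode(String track); // simplified: returns a "decoded" string
}

// One possible implementation; the dependent code never names it directly.
class Mp3Decoder implements AudioDecoder {
    public String decode(String track) {
        return "decoded:" + track;
    }
}

// Dependent code holds only the interface; the concrete class is
// injected through the constructor.
class Player {
    private final AudioDecoder decoder;

    Player(AudioDecoder decoder) {
        this.decoder = decoder;
    }

    String play(String track) {
        return decoder.decode(track);
    }
}
```

The `Player` compiles against `AudioDecoder` alone, so swapping in a different decoder (or a test double) requires no change to `Player` itself.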

Event-Driven Applications

At the other end of the spectrum, an event-based application would have one component broadcasting a message while another component listens for that message. The two components know nothing about each other, and neither has any direct interaction with the other. In this case, it is up to a messaging bus to deliver messages from one component to another. The responsibility of the sender ends when it sends the message, but the sender may also be a receiver, listening for a response message of some kind.
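A bare-bones message bus might look like the following sketch (the topic-string design is an assumption for illustration; real buses typically offer typed messages, threading, and unsubscription):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal synchronous message bus: topics map to lists of listeners.
class MessageBus {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    // Receivers register interest in a topic.
    void subscribe(String topic, Consumer<String> listener) {
        listeners.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    // Senders broadcast and walk away; the bus delivers to whoever is listening.
    void publish(String topic, String payload) {
        for (Consumer<String> l : listeners.getOrDefault(topic, List.of())) {
            l.accept(payload);
        }
    }
}
```

Note that the sender and receiver only share the topic name and payload format; neither holds a reference to the other.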


The biggest difference between the two schools of thought is that the event-driven architecture is asynchronous. When a sender broadcasts a message, there’s no guarantee that the response will come anytime soon. This works well for environments like JavaScript, where services often cross the client/server boundary in order to fulfill a request, such as in AJAX.

A second advantage of event-driven architecture is the ease with which additional behaviors can be added to an ecosystem. Adding behaviors in response to a message is as simple as adding a new listener for that message; neither the existing receiver(s) nor the sender need to be modified.

Although the asynchronous nature of event-driven applications sounds promising, in actuality it can create a bit of a burden on the developer. You no longer have any coupling whatsoever (not even loose) between senders and receivers. This can make code difficult to read and understand, because there are no contextual clues as to who might be listening for a message. It can also make refactoring risky if you aren’t able to track down every listener for a particular type of message.

An Example

Since this question arose while working on Project: Apollo, I’ll draw the example from there.

The underlying media player in Project: Apollo supports some obvious controls: play, pause, next, and so on. To capture this, I created an interface:

public interface TransportControl {
    void next();
    void pause();
    void play();
    void previous();
    void seek(double percent);
    void stop();
}

There are several components throughout Project: Apollo that need to be able to control playback. The GUI, for example, has buttons that correspond to these functions. The playlist also invokes these controls to advance the player to the next track.

In a loosely-coupled application, an instance of TransportControl would be injected into the GUI as a collaborator. Then, the GUI could respond to a user click by directly invoking the next() method on the TransportControl. This keeps the code clear and easy to read, but it actually creates another problem. The playlist still needs to be informed when the GUI has invoked a “next” command. Some mechanism must exist to instruct the underlying player what to play next.
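In code, the loosely coupled wiring might look like this (the `TransportControl` interface is repeated so the snippet stands alone; `Gui` and `onNextClicked` are hypothetical stand-ins for the real button handler):

```java
// The interface from above, repeated for a self-contained snippet.
interface TransportControl {
    void next();
    void pause();
    void play();
    void previous();
    void seek(double percent);
    void stop();
}

class Gui {
    private final TransportControl transport;

    // The collaborator is injected; the GUI never names a concrete player.
    Gui(TransportControl transport) {
        this.transport = transport;
    }

    // Called by the "next" button's click handler.
    void onNextClicked() {
        transport.next(); // clear and direct -- but the playlist never hears about it
    }
}
```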

In the event-driven architecture, the GUI would simply broadcast a message requesting that the next item be played. In this case, the TransportControl would receive the “next” message and know that it should stop playing the current item. The PlaylistManager would also receive this message and instruct the TransportControl to load a new item. Further, a debug logger can log out the entire process, providing detailed messages about what is happening.
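The same interaction, sketched in the event-driven style. The bus and the listener wiring here are illustrative, not Project: Apollo’s actual code; the point is that three independent reactions hang off one broadcast:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal synchronous bus, as sketched earlier.
class MessageBus {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    void subscribe(String topic, Consumer<String> listener) {
        listeners.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    void publish(String topic, String payload) {
        for (Consumer<String> l : listeners.getOrDefault(topic, List.of())) {
            l.accept(payload);
        }
    }
}

class EventDemo {
    static List<String> run() {
        MessageBus bus = new MessageBus();
        List<String> log = new ArrayList<>();

        // The player stops the current item when "next" is requested.
        bus.subscribe("next", msg -> log.add("player: stop current item"));
        // The playlist manager reacts to the same message by loading the next item.
        bus.subscribe("next", msg -> log.add("playlist: load next item"));
        // A debug logger observes everything without touching sender or receivers.
        bus.subscribe("next", msg -> log.add("debug: next requested"));

        // The GUI's entire responsibility: broadcast its intent.
        bus.publish("next", "");
        return log;
    }
}
```

Adding the debug logger required no change to the GUI, the player, or the playlist, which is exactly the extensibility advantage described earlier.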

Conclusion and Comments

At this point, I am unsure of the best solution. Clearly there is more than one way to solve this problem, and some folks I know would recommend I simply make something work and move on; it can always be refactored later. That’s good advice for a project on a timeline or budget, but this is a learning experience where I can really explore these types of questions. My instinct tells me the right answer is some manner of balance between the loosely coupled option and the event-driven option.

So does it make sense to inject the GUI with an instance of the TransportControl? Or does it make more sense for components wishing to control playback to broadcast a message according to their intent? Thoughts?

UPDATE: After reading in more detail on the Wikipedia entries I linked to above, I think the best solution for my example case is a complementary application of both event-based and service-based architectures. Specifically, changes in state throughout the application should be expressed as events, but components that need to invoke certain behavior should do so on a service object or proxy, even if that proxy merely relays an event under the hood.
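To make the hybrid concrete, here is one possible shape (all names are illustrative, not Project: Apollo’s actual API): behavior is invoked directly on a service object, while the resulting state change flows outward as an event.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Callers invoke behavior directly on the service, keeping call sites
// readable; the state change the call produces is broadcast as an event.
class PlaybackService {
    private final List<Consumer<String>> stateListeners = new ArrayList<>();

    // Components interested in state changes subscribe here.
    void onStateChange(Consumer<String> listener) {
        stateListeners.add(listener);
    }

    // Behavior is a direct, discoverable method call...
    void next() {
        // ...but under the hood it relays an event so observers (the
        // playlist display, a debug logger) can react without coupling.
        for (Consumer<String> l : stateListeners) {
            l.accept("trackChanged");
        }
    }
}
```

The caller gets the readability of a direct method call, while observers get the decoupling of an event, which is the balance the update above is reaching for.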
