Surinam 2.0 Manual

This manual covers the Architecture, Philosophy and Concepts for the Surinam framework.

Copyright (2008-2010) Samuel Provencher, All rights reserved


  1. Introduction
  2. Concepts
  3. Philosophy
  4. Architecture
  5. Service Block
  6. Action Documents
  7. Service Block Commander

Hello, and welcome to Surinam. There have been some impressive successes in new frameworks and methodologies over the past ten years, driven, in no small part, by the rise and validation of the Open Source movement and other forward-looking enterprise efforts, both large and small. We all owe much to the many projects that have come before, blazing trails and encouraging others to innovate in new and interesting ways. Along this line, there is still much more to do, and that brings us to what I hope is another new beginning. Enjoy.

It is important to recognize that almost nothing is born out of a vacuum and that inspiration and innovation come from many places, including other frameworks like J2EE and Open Source projects like Spring; therefore, we remain humbled and inspired by the work these and other organizations have contributed over the years. The Surinam project is the result of a natural evolutionary process that goes back a decade in my experience as a software developer. Many of the ideas incorporated into the project represent essential bits of wisdom that have been carefully extracted from years of bearing witness to development activities, some successful... others either doomed from the start or developing an odor as time went on.

Challenges & Cascades
Often when I have a disagreement with a thoughtful person, I am able to trace it back to fundamental assumptions they are making that I am inclined to immediately challenge. In the ensuing discourse, if I am able to get them to take a different view by challenging those assumptions, a small change anywhere can create a cascade that touches nearly everything else.

What Surinam Is
Surinam is a framework that supports a more comprehensive approach to developing software; it strives for a cradle-to-grave methodology that rewards good design and encourages you to think about the implications of a complete software lifecycle that includes long-term maintenance, upgrades, and the software's inevitable evolution over time. This framework hopes to introduce the concept of "Hyper-Dynamic Software" as a desirable and achievable behavior. Essentially, this is about the benefits of developing software with an embedded Service Oriented Architecture (SOA) inside; Surinam didn't invent it but does offer a way to make it work. Surinam's significance as a project is somewhat moderated as the example that proves the rule; like Bach's "The Well-Tempered Clavier," the project itself is an implementation that argues the position of a theory which may someday prove to be more valuable than the implementation itself. In fact, Surinam is a manifestation of a collection of software philosophies, practices and methodologies (some existing and some new); it's as much about how we develop software as it is about the software we end up deploying. Aside from being compatible with your favorite Java methodology and framework, Surinam seeks to extend development methodologies by suggesting that architecture and design don't end with delivery or with deployment.

What Surinam Is Not
Surinam is not intended to replace any of the great frameworks available to the Java community. Instead, Surinam is embeddable by design, which means that it can integrate with all of the frameworks, containers and other technologies that are out there. It is not a panacea but another way to solve a particularly difficult problem (Software Dynamism) that may not even be a concern for some categories of software.


Anyone who has been in the software industry for more than five years understands the difference between what we think is the right thing to do... and what almost always ends up happening. While I applaud efforts to create methodologies that work better than what much of the industry uses, the truth is that few of them will ever become widespread. I suspect that this is primarily due to how software development is financed and managed, not to mention a lack of skill and understanding that is both overpowering and breathtaking. The Surinam framework has aspects of its architecture and design that are driven not by what makes for clean theory but by how it offers unprecedented flexibility to integrate into any project. And while there is a general trend in frameworks toward a POJO-centric world, for some things it is a move in the wrong direction; sometimes, formalism can be a force for good and is necessary, especially when you are changing some of the other rules that alter the balance of things.

At a technical level, Surinam can load new interface definitions and assign implementations to them... at runtime without stopping. This is made possible because of the robust, dynamic loading, routing and invocation mechanism that lies at the heart of the Surinam Framework. If this sounds like a distributed ESB or Web Service architecture you'd be on the right track, however, this all happens locally and internally without losing the benefits of resource sharing and the efficiency of standard Java invocations... no marshalling, unmarshalling, XML parsing or serialization necessary. The efficiency and control of a unified local application is preserved while the dynamism of an SOA becomes available. The types of architectures that could leverage this behavior are left open-ended; once you have solved some of the hardest and most restrictive problems, it becomes just a matter of policy and design. This manual is all about how this happens and what it means, but the real payoff will be to discuss what you can do with this, once you've changed the rules.

Hyper-Dynamic Software™
Let's forget about the limits of technology for the moment. In a perfect world, we would be able to write software and just have the changes "show up" in a running server auto-magically. However, to keep things from devolving into a chaotic mess, we need to temper that idyllic view with a small amount of engineering discipline that says we should (at the very least) be able to gather those changes up into discrete releases to keep the whole thing manageable [things seem to go so much better when we know precisely what software is running at any given time]. Once we had such a set of changes, we could introduce them to a running server as an upgrade and the new code paths would come online and just start working... that is hyper-dynamic software in action.

Agile & Temporal Design
The ability to update entire sections of deployed software in a controlled manner without disruption is a game-changing development... this is hyper-dynamic behavior. This affects more than just deployed software but everything leading up to it; no rolling upgrades, no bouncing the server... the impact of this can be enormous for certain types of software.

"Temporal Design" implies that the passage of time becomes an integral part of the design, starting with decisions about how much of the application needs hyper-dynamic behavior and extending to issues of service granularity. In time, we may find that this new ability can have a profound effect on the process of developing software, and that freeing ourselves from long-held assumptions can radically affect our software designs.

Agile philosophy includes the concept of "embracing change" as software is being developed. Surinam dares to ask, "What if you could continue to embrace change after the software is delivered, deployed and running?" As teams continue to build out new functionality, what if that functionality could simply be added to the existing software, while it's running? The Surinam hyper-dynamic framework makes this not only possible, but relatively easy to do.

Once you change the rules, you change the choices. If you have a large application with a Service Block embedded in it and it contains just a single service, you still have the ability to upgrade and/or modify that service at runtime... that's pretty tiny but still dynamic. Alternatively, you could implement an entire application as a federation of discrete services (SOA-style) that are all woven together inside the Service Block; then you might have upwards of 90% of your entire application's business logic temporally designed and hyper-dynamically deployed. Note that this will work whether you are running on a lightweight web server or a full professional JEE stack; it will work embedded in a single Servlet, inside an EJB or in a stand-alone desktop Swing application. In short, Surinam can go anywhere Java can go.

Thinking Multi-Dimensionally
We tend to design and deploy our software in one dimension based on monolithic, static deployment scenarios. If you imagine your application, whole or in part, as an aggregation of services (an SOA) that are individually versioned and upgradable, time can become a new aspect of your design. This aspect of how a given application is designed to evolve over time is a feature of Service Oriented Architectures that is heavily leveraged in Surinam's approach to building software.

In the diagram to the right, we can imagine snapshots of an application's service graph over time where some pieces remain static while others change. The "Service Graph" changes granularity, adds new behaviors and even provides new entry points that are part of a larger, traditional application upgrade that possibly wasn't part of the original design. There are a few key features of the framework that make this possible: service isolation, support for sufficient granularity to isolate 'silos' of functionality and, most of all, accounting for temporal factors in the initial, overall design.


Uptime Is King
There's nothing more annoying than having to bounce a server or redeploy an application every time you make a change. Whether you service a multi-billion dollar international corporation or just sell cookies for your church, having a website go down has an impact. While no framework can solve all your problems, having the ability to upgrade your software applications without having to take them offline can be a pretty big deal. This is particularly important for enterprise applications that might have a small piece of software that needs to change at regular intervals (like promotions) but have no idea what those changes might be ahead of time.

The Surinam Hyper-Dynamic Framework makes it possible to replace entire sections of your application on the fly, no restarts needed. To this end, Surinam introduces "Action Documents" which are essentially administrative actions captured in XML. We will cover them in detail elsewhere, but one of the reasons for defining Action Documents as simple XML is to allow them to be transmitted easily over a network. This approach allows for solution patterns where persistence is part of the equation in addition to transmission and possibly even scheduling.

Service Oriented Architecture - An "SOA in a Thimble"™
I have a suspicion that one of the reasons SOAs have become so popular is that, years ago, what people had been trying to do wasn't working very well; even though the overhead of an XML interface is costly, it works. People can understand it, and the benefits outweigh the penalties in many cases.

Often during a project, entire areas of functionality are compartmentalized, not because it makes technical sense but because it is easier for management to track on a project schedule; the area is given a name so it can be said that it is "scheduled" as a self-contained piece of functionality, and this is not necessarily a bad thing. Converting a bit of functionality to a discrete service not only decouples it from the rest of the project in a development sense but in a management sense as well. Suddenly, that "service" can have its own independent development and release schedule, which makes it easier to hand off to another group should that be necessary - the rebirth of distributed development. Also, there's the very real dynamic involved when a development center in California cannot coordinate with New York in any manner that is useful, so things just work better if we can minimize the interaction between the groups as much as possible.

This SOA trend has been fairly successful, as we are seeing applications more and more becoming a federation of services or a mash-up; however, the XML overhead issue has in turn given rise to the Enterprise Service Bus (ESB). An ESB is a code word that often means, "we like the isolation and decoupling of services but still hate all the overhead." Sometimes that issue is addressed by eliminating the XML entirely, as some ESBs do.

The Surinam framework tries to preserve some of the best things of an SOA like having the ability to publish a formal "Interface" and to have that veil of separation (isolation and versioning), but these services can be designed to function as parts of a unified application, which is what most of us really want. In order to get that level of intimacy, we trade away making services generally available to lots of competing applications; Surinam Services are meant to be shared by threads in an application running in the same VM, not by threads running in different applications in different VMs. Note that this does not prevent reuse since any of the services can be reused in another application's Service Block. You can even use distributed development scenarios, building services completely in isolation from each other on different sides of the planet and putting them together later. You can develop new services that were not part of the original application and evolve the software in new directions. However, since Surinam Service Blocks keep all services together locally, execution is faster than distributed SOAs.

On the other side of things, we want to pretend that it's not an SOA at all, but just a standard, monolithic Java application made up of a unified set of compiled classes; we would like to be able to use those classes with a faux naivety, since that can simplify our code and lower the bar on certain esoteric skills. Unlike the standard Java approach, Surinam Services extend what we can do by offering formally versioned services and implementations. In fact, you can even do some other tricks, like running different versions of the exact same class side by side, which Java does allow as long as they are registered as implementing different interfaces or exist in different class loaders; Surinam manages this for you.

The Problem with POJOs
Frameworks that simply allow classes to plug in as POJOs are the general trend, and that seems to be a good thing, as it dispenses with much of the formalism that takes so much time to deal with; the downside is that it can devolve into an ad hoc programming model which hurts the visibility that conveys a clarity of intent. Whatever model you choose, Surinam becomes part of the local application code while continuing its SOA roots; for this, a hybrid approach is justifiable and appropriate. To that end, Surinam Services require that you implement certain interfaces to declare their manageability and for efficiency (interfaces are a compile-time activity). If you have an object that you wish to use but cannot alter the class to implement the required interface(s), you should implement what I refer to as the "Injector-Shell" pattern, which translates to treating the POJO as pure business logic and wrapping it in a class that has knowledge of its environment, thereby shielding the object from needing such knowledge.
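A minimal sketch of the Injector-Shell pattern might look like the following; the interface names (ManagedService, Greeter) are illustrative stand-ins, not Surinam's actual required interfaces.

```java
public class InjectorShellDemo {
    // A framework-style interface the service must implement (assumed name).
    interface ManagedService { String serviceName(); }

    // The contract clients consume.
    interface Greeter { String greet(String who); }

    // The untouched POJO: pure business logic, no framework knowledge.
    static class GreeterPojo {
        String buildGreeting(String who) { return "Hello, " + who; }
    }

    // The shell: implements the environment-facing interfaces and forwards
    // calls, shielding the POJO from any knowledge of the framework.
    static class GreeterShell implements Greeter, ManagedService {
        private final GreeterPojo delegate = new GreeterPojo();
        public String greet(String who) { return delegate.buildGreeting(who); }
        public String serviceName() { return "Greeter"; }
    }

    public static void main(String[] args) {
        Greeter g = new GreeterShell();
        System.out.println(g.greet("world")); // prints "Hello, world"
    }
}
```

The POJO can be tested and reused on its own; only the shell needs to change if the framework's requirements do.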

The Emperor Has No Clothes
As developers, there are few tools we use that are more prevalent than source control systems. These invaluable systems use a model that says that a file is the same file based on its name and location and that a version or revision of that file lays right on top of the old version. This model reinforces the fact that no two versions of a file can coexist at the same time (that temporal issue again) because of an annoying implementation detail of how source control systems work in relation to computer file systems. In this chicken and egg scenario we are pushed by our technology to do things one way, and languages work the exact same way; there can only be one version of one file in one place at one time. The danger in all of this is that while this model has been serving us, it has also been training us; it has been training us not to think multi-dimensionally. Being taught to avoid thinking this way about our code may very well be completely wrong - we will examine this.

Published Service Interfaces are Immutable and Versioned
Imagine a scenario where your source control system actually encoded revision information into the files you see in your sandbox. Since source control is also restricted by limitations of the file system, it would have to get around the restriction of one file with one name in one place by changing the file's name to encode the revision information; as impractical as this seems, at first, it would at least allow multiple versions of a class to coexist in your sandbox. Now if Java supported similar behavior, it would perform some similar encoding in the VM. As many of you already know, classes are in fact encoded this way with a discriminator based on the class loader to allow multiple instances of the same class to be treated as if they were different classes altogether; Surinam uses this behavior for service implementations so that we achieve service isolation and support for multiple versions of the same interface. Surinam is simply suggesting that we extend that obvious solution to the file system as well.

To understand Surinam's approach to interfaces, we need to examine what constitutes a version of a thing. Simply put, versioning is a mechanism that allows us to distinguish one thing from another and possibly implies an order or sequence; this remains true even if the two things are different compiles of the same class at two different points in time. Forget about how we traditionally label software versions with sequential numbers, since that does not manifest itself at runtime; instead, consider that there is nothing in this definition that says it has to be exactly one way. A sequence could be {Aug.1.2005, Sept.2.2005} or {A.a, A.b, A.c,...}; we can tell the elements apart and we can infer order.

When it comes to formal interfaces, Surinam is both concrete and specific based on the following class loader logic:

Figure 1.

Rule A: Since classes can only see upwards in the loader hierarchy, peers cannot 'see' each other directly (figure 1 - A cannot see B).

Figure 2.

For class implementations loaded by peer class loaders (A & B) to communicate directly, they must implement a common shared interface made visible by a class loader hierarchy that puts the interface in a common parent. Figure 2 shows a common interface loaded into the parent class loader (P) and the implementation of the interface loaded into B. The client who consumes the service 'Foo' is loaded into a peer class loader (A), which can see the common interface in P but cannot see or access anything in B. Even if the client in A could get a reference to an instance in B, it still could not use that object unless it were able to cast that reference to the common shared interface in P; this is an essential part of what the Surinam framework does.
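The visibility rule above can be observed with plain Java class loaders. In this sketch, java.lang.Runnable stands in for the shared interface in P, and two empty URLClassLoaders stand in for the peers A and B; because neither peer has classes of its own, each delegates the request to the common parent, so both see the identical Class object:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderHierarchyDemo {
    // Returns true when two peer loaders resolve the same interface Class
    // through their shared parent (the arrangement shown in Figure 2).
    static boolean sharedViaParent() {
        ClassLoader parent = LoaderHierarchyDemo.class.getClassLoader(); // P
        try (URLClassLoader a = new URLClassLoader(new URL[0], parent);  // peer A
             URLClassLoader b = new URLClassLoader(new URL[0], parent)) { // peer B
            // Each peer delegates the lookup up to P, so the Class objects
            // are identical and casts succeed across the peers.
            return a.loadClass("java.lang.Runnable") == b.loadClass("java.lang.Runnable");
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(sharedViaParent()); // true
    }
}
```

This shared-Class identity is what allows a client in A to cast an instance created in B to the common interface held in P.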
Figure 3.

Rule B: In order for different versions of the same interface to coexist in a common loader context, they need to have different names in order to avoid naming collisions. Notice that the published service interface 'Foo' is versioned via a numeric name extension. Figure 3 (above) shows two service versions together in a single common loader context with two implementation versions, each isolated into separate class loaders below the parent (the similarity between the implementation versions and service versions is purely coincidental).
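A sketch of what Rule B looks like in source, assuming a numeric name-extension scheme; the names Foo_1 and Foo_2 are hypothetical, not taken from the framework:

```java
public class VersionedContractDemo {
    // Version 1 of the published contract: the parameter is a user name.
    interface Foo_1 { String process(String userName); }

    // Version 2 captures a semantic change (the parameter is now a
    // destination name), so the contract is renamed rather than silently
    // altered; both names can coexist in one loader context.
    interface Foo_2 { String process(String destinationName); }

    static String demo() {
        // Consumers opt in to a specific version explicitly.
        Foo_1 v1 = user -> "routing for user " + user;
        Foo_2 v2 = dest -> "routing to destination " + dest;
        return v1.process("alice") + " | " + v2.process("warehouse-7");
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because the two versions have distinct names, there is no naming collision in the shared parent loader, and client code declares exactly which semantics it depends on.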

Surinam reconciles both rules by putting a stake in the ground and suggesting that we may have been using the wrong model for formal interfaces for a long time, since long before Java was a language. The argument against this practice is usually about manageability. All the hand-wringing about class duplication goes back a very long way, arguing that it hurts code reuse; whether or not this is true, it has nothing to do with interfaces, since they contain no business logic and most modern IDEs allow interfaces to be generated, duplicated, and renamed with ease. To argue that an interface should not be versioned is to ignore the different ways an interface can change; to say that, as long as you can get away with it, you want to be able to swap implementations without the client's knowledge opens the door for the kind of sloppy semantics that litters most software today. Surinam argues that explicit versioning represents not the formal interface we all normally see but the "Semantic Interface." This is where current industry practice gets problematic; when an interface experiences a semantic change, it is not reflected in any material way. Traditionally, if an implementation suddenly alters the semantics of the parameters and doesn't reflect that in the interface, very bad things can happen. One reason this is bad is that the software might appear to be working correctly but be doing the wrong thing due to a semantic change.

Take the following three signatures, which are different versions of the same interface method over time; what do they have in common? What they share is that they are interchangeable and your code would still build and run; my guess is that you would be surprised, if not unhappy, with the results. You would expect that a developer would catch this mistake based on bad results, but that ignores the cause. It is clear that the cause is the inability of the interface to capture the semantic change (technically correct + semantically wrong = broken software).

public GeodeticPoint getVectoredPosition(double offset, double lat, double lon);

public GeodeticPoint getVectoredPosition(double lat, double offset, double lon);

public GeodeticPoint getVectoredPosition(double lat, double lon, double offset);

Once we get past the initial interface argument, we see that Surinam takes a more universal view of standardization. Imagine a factory that builds screws, and you want to build a device that uses a particular type of screw. You don't ever want to have to worry that catalog item "SK-4901" would, at some point, change its thread ratio and no longer fit; it would be chaos and the world would not work. Plainly put, the Surinam Service Definition Model says that once you publicly release (or issue) a version of an interface, it is immutable. This means that if someone on the other side of the world has a specific version of a service interface, they can create and release an implementation for it, and you can consume it without worrying about which version it is. This is the Surinam view of service definitions, intended to make it possible for a universe of implementations to become available (perhaps as Commercial or Open Source pluggable implementations) based on immutable, published interfaces. Additionally, each successive semantic change can be captured in a version change, and the documentation can then discuss exactly what has changed since the last version.

Therefore, there are two things that could cause an interface version to change:

  1. If any signatures change (standard interface expectation).
  2. If there is a semantic change that would not be normally reflected in a traditional interface. There is value in being able to know that the first parameter that used to be a user name is now a destination name.

Altering the interface name extension requires that all code wishing to consume the new service version be explicitly moved forward; this is preferable to an automatic, blind migration that occurs in a hidden and possibly dangerous manner that could introduce bugs. Discovering these usage points in your code is trivial with modern tools, and it ensures that some thought is put into the implications of moving to the new service version. When a signature changes, your code immediately breaks at compile time; but we can now also account for semantic changes in the signature, where a compile would succeed but the software could fail - a change that is hopefully called out in the service documentation, since services might be acquired from an external source.

As you might now be thinking, this addresses a flaw in how we communicate with each other about a service's capabilities in a way that should work better across groups that don't interact.

All Invocations are Local
Surinam encloses all Managed Services in a Service Block, which is defined as a discrete object that houses a "Service Graph" along with some additional internal support structures; Surinam is part invocation framework and part matchmaking service. The client is not normally allowed to hold on to the actual implementation reference; instead, it gets a proxy object that implements the service interface (not unlike JEE), and when the client makes an invocation, an Interceptor will perform a lookup on the fly and make the invocation on that implementation, not unlike a Stateless Session Bean. Surinam extends beyond the JEE invocation model by being Hyper-Dynamic, in that the implementation instance that gets used for any given invocation is allowed to change at runtime. If you are familiar with the JEE pattern for SLSBeans, remember that a Service Graph is made up of individual instances and not instance pools, so you will have to account for this when implementing stateful types of processing.
Since the client is either in the parent context (outside the Service Block) or a service inside the block, and all services are inside the block, all service invocations are local.
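As a rough illustration of this proxy-and-lookup model (the names below are invented for the sketch, not Surinam's API), a java.lang.reflect.Proxy interceptor can perform the lookup on every call, which is what allows the target implementation to be swapped at runtime:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceBlockSketch {
    interface Foo_1 { String status(); }

    // Stand-in for the Service Graph: service name -> current implementation.
    static final Map<String, Object> graph = new ConcurrentHashMap<>();

    // Hand the client a proxy; the interceptor looks up the current target
    // on every invocation, so reshaping can retarget live references.
    @SuppressWarnings("unchecked")
    static <T> T proxyFor(Class<T> contract, String name) {
        InvocationHandler interceptor =
                (proxy, method, args) -> method.invoke(graph.get(name), args);
        return (T) Proxy.newProxyInstance(
                contract.getClassLoader(), new Class<?>[]{contract}, interceptor);
    }

    static String demo() {
        graph.put("foo", (Foo_1) () -> "v1");
        Foo_1 client = proxyFor(Foo_1.class, "foo");
        String before = client.status();
        graph.put("foo", (Foo_1) () -> "v2"); // "reshape": swap implementation
        String after = client.status();       // same proxy, new target
        return before + "->" + after;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // v1->v2
    }
}
```

The client holds the same proxy reference throughout; only the object the interceptor resolves changes.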

Implementations May Be Versioned
Since Service Contract Definitions are required to be kept together in the same top-level class loader (Service Loader) inside the Service Block, they must be versioned (with a scheme of your choice) to avoid naming collisions. Implementations do not have this requirement since each implementation gets its own class loader that hangs underneath the Service Loader. By design, all service definitions are published, shared instances and implementations are not required to be.

Dynamic vs. Static Relationships
Service references create relationships between Service callers and Service targets and can be characterized by the fact that proxied references are managed references and static references are unmanaged by the framework. While they will both implement the Service Contract and can be used the same way, one will be a generated proxy that relies on the framework to function as a "match-maker" to find invocation targets and the other will be an actual reference to the Provider Service Implementation itself; the latter being a standard Java reference. This can be extremely important and must be well-understood before mixing these two types of references in your Service Graph. The goal of intelligently mixing these relationships is to achieve the perfect balance of dynamism and efficiency. It is not hard to figure out that using proxies for every service call will introduce overhead that is always undesirable and may be avoidable. The benefit of a static relationship between two services is that they can be more efficient; the penalty is that they are permanently joined and the onus is now on the developer to manage that relationship as the framework has been removed from the call chain; this arrangement is called a "Service Assembly."

Service Assemblies
Assemblies are sets of things that, once they are put together, are meant to be used as a single thing (like a car's transmission). In Surinam, a Service Assembly is a group of services that are bound together with static references; among other things, this means that the relationships are unmanaged by Surinam. Since assemblies are defined by the nature of held references, the concept of assemblies is orthogonal to Service Contracts. Assembly definitions live only in the implementations, so it is possible that assemblies may come and go based on architectural or design goals, independent of whether there is any change in the Contract(s). Once the Services are instantiated and injected with those references, the relationships will never change unless explicitly modified programmatically; this is how standard Java invocations work in non-hyper-dynamic software. The illustration below shows a Service Block with a number of services deployed... a Service Graph. Inside the graph we have highlighted a Service Assembly in which some of the injected references are static. To set apart the Services that are part of the assembly, we have shaded the nodes, but otherwise there is no difference between those services and the others.
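The static side of this distinction can be sketched in plain Java; here the whole "assembly" is nothing more than ordinary object references, which is exactly why the framework cannot retarget them (all names are illustrative):

```java
public class AssemblySketch {
    interface Step { String run(String in); }

    // Once constructed, these are ordinary Java references: the framework is
    // out of the call chain, and graph reshaping cannot retarget them.
    static String runAssembly(String in) {
        Step b = x -> x + ">B";            // plain implementation
        Step c = x -> b.run(x) + ">C";     // static reference to B
        Step a = x -> c.run(x) + ">A";     // A binds the assembly together
        return a.run(in);
    }

    public static void main(String[] args) {
        System.out.println(runAssembly("X")); // X>B>C>A
    }
}
```

Calls along these references are standard Java invocations with no proxy overhead; the trade-off is that B, C and A are now permanently joined and must move together.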

Service Assembly Diagram 1

In the diagram below, we have blown up, labeled and extracted the assembly and its surrounding services so that we can discuss the implications. Entry Point X holds a standard Surinam reference to Service A, while A holds a static reference to B. Node C also holds a static reference to B, but even if it didn't, node C would still be tied to B in the same assembly because A holds static references to both B and C. What this means is that even in the event of graph reshaping that retires or replaces B, Services A and C will still hold their references to the instance of B. To summarize, A holds static references to B, C and D; D holds C, which holds B. The semantics of these relationships (the assembly) are that they will need to move together during reshaping.

This dynamic is exemplary of the "sharp tools" paradigm; it can be the key to maximizing the efficiency of a running system while also preserving the appropriate balance of dynamism. It is also possible to deliberately migrate the Service Graph in a way that gets a bit clever with what happens if you choose not to migrate the assembly as one unit.

An example of a multi-stage reshaping of an assembly might be warranted when full reshaping of the graph would take too long to achieve in one step. It might be possible to reshape the graph in such a way that you replace some of the pieces of the assembly in preparation for then executing a smaller reshaping that upgrades the entry points to the assembly at a later time. This allows the replaced services to continue to function despite the fact that the Service Block no longer manages them. This could be useful in some software upgrade scenarios.

In Phase 1 (above), we replace Services B, C and D; since A holds static references, those are unaffected by reshaping and the new ones B', C' and D' are mostly unused except for the possibility of X calling to D'; a call from X to A, however, would continue to work as it always has.

Assembly Phase 2

In Phase 2 (above), we only need to replace Service A. With no references holding on to Service A and no way to make invocations on any of them, the assembly becomes disconnected and the Service Implementations will eventually be removed. In the meantime, once A' comes online after reshaping, all of the new services will work as before, even with a new set of static references. It may not be obvious from this example, but Service Assemblies can also come and go with Graph Reshaping. You can introduce an assembly where there wasn't one before, making your deployment more efficient, and you do not have to replace an assembly with another assembly.

Service Invocation Routers
As of version 1.2, Service Invocation Routers are available to developers when using the Finder to acquire a Service proxy. These are the most important part of a proxy: they are the classes that implement the invocation logic. Exposing this capability makes it easy to implement custom functionality, such as logging every invocation, refreshing a cached service every N invocations, or creating Contract-specific routers that are designed to do special things (see Routing Adapters).

Writing your own router simply involves implementing one, and possibly a second, interface.
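Surinam's actual interface names are not reproduced in this chapter, so the following is only a sketch of what a custom router might look like; the interface names (`InvocationRouter`, `ReshapeAware`) and their signatures are assumptions standing in for the real ones:

```java
import java.lang.reflect.Method;

// Hypothetical router contract; Surinam's real interface names and
// signatures may differ.
interface InvocationRouter {
    Object route(Object proxy, Method method, Object[] args) throws Exception;
}

// Hypothetical second interface for routers that react to graph reshaping.
interface ReshapeAware {
    void invalidate();
}

// Example: a router that counts ("logs") every invocation before delegating.
class LoggingRouter implements InvocationRouter, ReshapeAware {
    private final Object target;
    private int invocations = 0;

    LoggingRouter(Object target) { this.target = target; }

    public Object route(Object proxy, Method method, Object[] args) throws Exception {
        invocations++;                       // log every invocation
        return method.invoke(target, args);  // then delegate to the target
    }

    public void invalidate() {
        // a caching router would drop its cached reference here
    }

    public int invocationCount() { return invocations; }
}
```

The same shape supports the other use cases mentioned above; a refresh-every-N router would simply consult its counter inside `route` before deciding whether to re-acquire its target.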

Routing Adapters
The ability to provide your own Invocation Router when you create a proxy opens the door to writing non-generic interceptors that contain code with specific knowledge of the Service and the data involved. Routing Adapters, for example, might know how to take the parameters destined for one service and turn them into an invocation on a different Service. This essentially allows developers to create software "bridges": client code consumes one service interface, but the call is actually "serviced" by a different Service with a different interface.

Advanced Invocation Routers
Although this is not currently recommended nor tested, it's worth mentioning in passing that it might be possible, and potentially advantageous, to write routers that do not stay local. These might manifest in two different ways. First, you could simply deploy a faux service that provides an API but gets its functionality from an external location (possibly a web service). This service would be consumed in the standard manner, and clients would not know that the call goes offboard; alternatively, the router itself could skip the service lookup and go offboard directly.

Custom Proxies
As previously mentioned, Surinam generates proxies that implement interfaces that represent Service Contracts that are deployed to the Service Block. As of version 1.2, the default proxy you get is optimized to avoid performing a lookup with every invocation by acquiring a static service reference initially and caching it. When you make an invocation, the Invocation Router will intercept the call and invoke the method on the cached service implementation instance. This works because the router instance handles a lifecycle callback provided by the Service Block when the Graph is reshaped. This method is called "invalidate" to convey that the instance being held could have been rendered invalid as the result of reshaping.
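The cached-reference-plus-invalidate lifecycle described above can be sketched as follows; the class and method names here are illustrative, not Surinam's actual API:

```java
import java.util.function.Supplier;

// Sketch of a caching router: one lookup, then a cached static reference,
// dropped when the Service Block signals that the graph was reshaped.
class CachingRouter<T> {
    private final Supplier<T> directoryLookup;  // stands in for a Service Directory query
    private volatile T cached;

    CachingRouter(Supplier<T> directoryLookup) {
        this.directoryLookup = directoryLookup;
    }

    // Lifecycle callback: the container calls this when reshaping may have
    // rendered the held instance invalid.
    void invalidate() {
        cached = null;
    }

    // Every invocation resolves its target here; the lookup only happens
    // when the cache is empty (initially, or after an invalidate).
    T target() {
        T t = cached;
        if (t == null) {
            t = directoryLookup.get();
            cached = t;
        }
        return t;
    }
}
```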

NOTE: Creation of custom proxies must be done programmatically; custom proxies are not supported by Action Document injection.

The "Injector Shell" Pattern
Sometimes when working in JEE or other containers, you are faced with a situation where you want to isolate your classes from having any knowledge of the container itself. To this end, you create a lightweight wrapper that implements all the interfaces you need, handles acquisition of any required resources (which hopefully implement formal interfaces so they can be mocked for testing), and uses Dependency Injection (Martin Fowler) to make sure that the class instance you create has all the resources it needs to initialize and operate.
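A minimal sketch of the pattern, with hypothetical names (`ConfigSource`, `GreetingService`, `GreetingShell`): the business class depends only on an interface, and the shell is the only class that touches the container or environment.

```java
// The resource the business class needs, expressed as a formal interface
// so it can be mocked in tests.
interface ConfigSource {
    String get(String key);
}

// Container-agnostic business class; resources arrive via injection.
class GreetingService {
    private final ConfigSource config;

    GreetingService(ConfigSource config) { this.config = config; }

    String greet(String name) { return config.get("greeting") + ", " + name; }
}

// The "shell": the one place that knows how to acquire real resources
// (here, crudely, from environment variables) and wires them in.
class GreetingShell {
    static GreetingService create() {
        ConfigSource fromEnv = key -> {
            String v = System.getenv(key.toUpperCase());
            return v != null ? v : "Hello";   // fallback default
        };
        return new GreetingService(fromEnv);
    }
}
```

In a test, you would construct `GreetingService` directly with a mock `ConfigSource`; only production code goes through the shell.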

The "Resource Broker" Pattern
When services share resources in a Service Block, you can inject external resources into a service designed for that particular purpose. This specialized service would not normally be dynamic, although the resources it holds may be. Typically used for system services such as configuration, a broker service would not contain any business logic, and multiple services could have dependencies on it. A similar type of object could function as a primary model used to preserve state; such a service would survive reshaping even if the dependent services would not.
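A broker of this kind can be very small. The sketch below is an assumption about the shape such a service might take (the interface and names are illustrative, not from Surinam's examples): no business logic, just thread-safe bind/lookup of shared resources.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical Resource Broker contract: holds shared resources for
// other services, contains no business logic of its own.
interface ResourceBroker {
    void bind(String name, Object resource);
    Object lookup(String name);
}

// A non-dynamic implementation; the broker survives reshaping while the
// services that depend on it come and go.
class ResourceBrokerImpl implements ResourceBroker {
    private final Map<String, Object> resources = new ConcurrentHashMap<>();

    public void bind(String name, Object resource) {
        resources.put(name, resource);
    }

    public Object lookup(String name) {
        return resources.get(name);
    }
}
```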


We will keep this section brief, but a little sharing might be good for the soul. Over the years there have been projects that needed to dynamically load classes in such a way as to make at least part of the application dynamic or pluggable. While frameworks like JEE and Spring are great for building server-based applications, I have always been a bit surprised that no one seems interested in doing this in a fine-grained, dynamic manner. As I discovered, this is not a terribly easy thing to do and is quite beyond what many engineering groups are willing to take on for internal projects, although many do, with mixed results. At a more fundamental level, there has to be a certain amount of formality in order to manage it all, which is counter to current trends in framework architecture but is the standard in SOA; and it is harder still to create a generic framework that will do all of this for you while keeping the learning curve shallow.

Software Perfection
Most people struggle with the concept of perfection. We try to create software that never fails, and in the process we fail to eliminate failure. Surinam embraces the philosophy that a perfect system follows a more natural model than this. Systems we might consider perfect are that way because they are in perfect balance, not because they have no imperfections; they succeed despite their imperfections. The best, most balanced systems are ones that can suffer failure and still continue to function as designed. A dog with a limp can still eat, play fetch and reproduce; among those of us who have ever had a beloved pet, it's hard to argue that such systems do not continue to provide value. If you were late for an appointment, I bet you would rather have a car that makes a funny sound and has no acceleration than one that won't start; at the very least, you would have preserved the choice of whether to drive it or not. I might even suggest extending the Agile mantra of "Embrace Change" to "Embrace Change, Embrace Failure, Achieve Balance." If your design does not achieve this goal, then consider that you might just be trying to solve the wrong problem; after all, it is a poor craftsman who blames his tools.

Sharp Tools
There are at least two schools of thought with regard to defensive programming. One rationale favors the equivalent of child-proofing your software, not against users but against other developers: each developer treats every other developer as someone who needs to be protected from their code. While there are situations where that may be warranted and that opinion justified, there is a difference between ease of use and simply overcompensating for a lack of developer discipline or proficiency; the price you pay is just too high, since heavy defensive coding hurts the design and the efficiency of your implementation, along with its potential for reuse. An alternative is to recognize how masters of more traditional skills, like wood carving, always maintain extremely sharp tools. They know which end of the tool is sharp and are expert at wielding the tools of the trade; and in the end, this process tends to produce items of superior quality. If it is not obvious already, the preference here is for sharp tools, so do the work and understand which end is the "sharp end" before you use it.

Some people like staying on the path because the going is easier and it is laid out for them, but others prefer to wander in the woods; the latter will sometimes come upon something new and surprising that no one knew was even there. Whichever you choose, effort will be made to expose APIs for your use; they will be called out and documented, and we will not go out of our way to prevent you from trying alternative ways to use the software should you choose to. In fact, we encourage experimentation, since the results will help drive the project in new directions. It is the goal of the framework to provide powerful software that does powerful and complex things; should you choose the equivalent of driving a screwdriver into a two-by-four with a hammer, it's your time to waste. All Surinam asks is that you respect the complexity of the problem it addresses; it is decidedly not a project goal to dumb this down, any more than you would give a monkey a handgun just because the safety's on. So, in keeping with woodworking wisdom: "Measure twice and cut once."

Java supports the generation of proxies on the fly at runtime and leverages the use of interceptors. From this came the thought: instead of using the interceptor to insert some additional processing before passing the invocation on to the invocation target, turn it around and use the interceptor as a routing switch, directing invocations to different targets. This pattern became one of the core principles that make hyper-dynamic software possible; it is less about localized interception and more about localized invocation routing.
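The routing-switch idea can be demonstrated with nothing but the standard `java.lang.reflect.Proxy` API. In this sketch (the `RoutingSwitch` class is illustrative, not Surinam code), the handler forwards each call to whichever target is *currently* referenced, so swapping the target reroutes every subsequent invocation:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicReference;

// A dynamic proxy whose interceptor acts as a routing switch rather than
// a pre-processing hook: the target is resolved on every invocation.
class RoutingSwitch {
    static <T> T proxyFor(Class<T> contract, AtomicReference<Object> target) {
        InvocationHandler router = (proxy, method, args) ->
                method.invoke(target.get(), args);   // route per invocation
        return contract.cast(Proxy.newProxyInstance(
                contract.getClassLoader(), new Class<?>[] { contract }, router));
    }
}
```

Client code holds the proxy; the implementation behind it can be replaced at any time without the client noticing.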

Much of the rest of the framework is about how to manage that little trick in a way that is as useful as possible, which meant solving a number of other vexing problems. As it turned out, the key was realizing that combining a class loader hierarchy with encoded naming for immutable, versioned service definitions allows new interface definitions to be introduced at any time; any hangup over service versioning is self-inflicted.


At the highest level we can begin with a simplified view of Surinam and the role it is designed to play. Basically, you have a container called a "Service Block" that holds and manages services. You do not normally have direct access to these "Managed Services"; that veil of separation preserves the block's ability to protect a clear chain of responsibility for the services. The Service Block has an API that lets you give directions programmatically if you want that level of control and responsibility, but you also have the option of using Action Documents to manage the block. Action Documents are XML documents that conform to the Action Document Schema defined by Surinam; these documents can be "applied" to a Service Block, allowing you to configure it using metadata alone.

Service Blocks are designed to be embedded inside other applications that you develop, where you wish parts of your application to exhibit Hyper-Dynamic Behavior. This essentially means that you can deploy your application with a number of services inside the Service Block that you use as part of your application, but you retain the ability to change and update those services at runtime.

It is designed to solve a problem for server-side applications where uptime and flexibility are both important goals; however, there is nothing about the framework that would prevent you from using it in a stand-alone Swing application if you wanted to. Any Java app will suffice.

Service Blocks
A Service Block is the most all-encompassing and complex structure in the framework; it might help to think of it as a micro-container. Unlike a micro-container, however, you deploy formalized services to a Service Block, as in an SOA. Like some other frameworks, Surinam offers a Dependency Injection annotation model for automatically connecting services to each other; however, injection is not required and you always have the option to write your own code to do special things.

ServiceBlock serviceBlock = new ServiceBlockImpl(this.getClass().getClassLoader(), null);
ServiceBlockAdmin serviceBlockAdmin = (ServiceBlockAdmin)serviceBlock;

In the snippet above, we are doing two simple things: creating a ServiceBlock (passing the current class loader and null for a classpath) and casting the block to an alternate interface. Surinam filters access to its API through formal interfaces. UNIX users are familiar with the security model of preventing certain operations based on permissions; even though most users could easily become the root user and have access to everything, as a matter of practice they don't, only becoming root when they need to do something that explicitly requires root permissions. By breaking out different types of operations into discrete interfaces, it becomes easy to be explicit in your intention and to use only the level of access that is necessary for a particular task. For example, a client that is only supposed to be a general consumer of services should not be using the admin interface; if it were, that would be a red flag that the design might need refactoring.

Action Documents can be used for most of the administrative actions that the admin API covers, but some people prefer a "stick shift" to an automatic. To this end, the Service Block has a general API and an administrative API that grants lower-level access to its inner workings, for those times when you need to do something internal and Action Documents are too automated for your taste.

Service Graphs
Service Blocks take sets of services and manage them. One of the ways this happens is by 'weaving' sets of services into an interrelated web of services... a directed graph. The types of graphs you might build for your application can take a few different forms, but usually they are directed, with some combination of connected, non-connected or hybrid types (a few examples are shown below).


In the diagram to the right, we see Service Block containment of a Service Graph. There are a few things we can notice from this depiction; out of the nine services, only two appear to have call chains that begin outside of the block, these are known as "Entry Points". On the other side of things, we see two nodes that have no dependencies on other services. Services that have no dependencies are referred to as "terminal services" as they terminate call chains. It is possible to have dependencies reach outside the block but that type of service is not shown here and will be covered elsewhere.


It is pretty easy to see that what we have here is a big software construction set to play with, and in a different context, this same diagram could be used to describe an SOA; the similarity is not coincidental. The limits on building complex Service Graphs largely come down to how much complexity the developer can manage and what the machine the software runs on will support. This is a primary difference between Surinam's approach and a traditional SOA: while SOAs are designed to scale up well, they don't tend to scale down very well, whereas Surinam is designed to be more than just local but embedded inside; it's just more... intimate.

The Service Directory
A Service Block is not simply an empty container where you deploy services, there are some supporting services that are part of the framework; one of the most important of these is the Service Directory. Just as one might expect in an SOA or JEE system, there is a directory service; in Surinam, every Service Block has a Service Directory (of services).

The job of the Service Directory is to keep track of the mapping of formal service definitions to service implementations and to deliver them when asked. This component allows new implementations to be swapped in for existing implementations at runtime with minimal side-effects; a diagram that refines the Service Block view is included below.

In the diagram, we have updated our view to show that the Service Directory is the component that has the responsibility of managing the services at runtime. The Service Block itself stays out of the way as much as possible at runtime, implied by the two Entry Points that allow the Service Graph to be driven from outside the block.


The Blueprint Manager
First of all, Service Definitions and Provider Implementations are defined in formal structures: as annotations, XML and objects. These definitions amount to Service Blueprints that specify how a Service is to be built and deployed, plus additional information covering where to find the requisite resources. When this metadata is used to construct and deploy a service, it is not thrown away. Instead, it is handed off to the Service Block's "librarian," the BlueprintManager, which maintains a detailed view of the state of the Service Directory that far exceeds simply tracking service mappings. In addition to maintaining a synchronized view of the directory, the BlueprintManager can deliver an Action Document that puts all of that information into a formal, schema-compliant text stream that can be persisted and reused. The implications of this will be discussed in the detailed coverage of this component.

Software Contracts are Service Contracts
Nearly all of Surinam is focused, directly or indirectly, on the issue of service definition and management. A primary aspect that has consumed a lot of time is the consideration of how services are defined. Just as with an SOA, and to a lesser extent all programming, the formal definition of service interfaces is at the very apex of importance, for reasons that we should not have to go into here. To that end, a well-defined Java interface converges and harmonizes the dual concepts of Software Contract and Service Contract (from an SOA perspective), complete with semantics. Surinam takes this to the next level of seriousness: a Software Contract is recognized as a formal concept by the framework. And for those readers just joining us, published Software Contracts (interfaces) are immutable and versioned in perpetuity.

Service Block

Creating a Service Block in your application programmatically is a simple matter of one line of code. You will notice that formalized interfaces are pervasive throughout Surinam to allow for the substitution of any of the key pieces for testing and extensibility.

ServiceBlock serviceBlock = new ServiceBlockImpl(this.getClass().getClassLoader(), null);

Service Contracts are Software Contracts
It is an implementation detail that these entities are implemented as formal Java interfaces; generally within Surinam, we refer to these concepts as Service or Software Contracts to keep from getting sucked into implementation-centric thinking, despite the need to understand things at that level of detail.

Service Contract Identification & Registration
Services are identified by the fully-qualified name of the defining interface; as with many other naming problems, the Java package structure helps avoid collisions between different organizations who might define and publish formal Service Contracts. And as previously mentioned, it is recommended that you use version numbers as part of the service name to allow different versions of a service to coexist in the service loader without conflict. However, if you are not ready to take that leap just yet, it is just fine to confine yourself to simple interface names if you don't mind losing the functionality provided by that level of precision.

Service Name Versioning & Immutability
One of the more controversial of Surinam's practices is the idea of Service Identification having version numbers embedded in the name itself. Admittedly, this is a hack to compensate for computer languages' lack of support for versioning in more elegant ways, but that is how we manage to live with technology. In any case, here is some of the logic behind the decision to go this route, counter to prevailing theory.

  1. Library Upgrades - We code to a single interface to allow swapping of libraries to newer versions without impacting our code.
    This rationale conveniently overlooks a key weakness of the practice: the case where the interface doesn't change but the library has new semantics. All of a sudden, the meaning of what a function returns changes, and the values you get back are now out of alignment with what your code is using those values for. Unversioned interfaces do not account for semantic changes in an interface.
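The naming convention described above can be sketched in Java as follows; the interface names and methods are purely illustrative, and whether a new version extends the old one is a design choice, not a Surinam requirement:

```java
// A published, versioned Service Contract: Greeter_1_0 is immutable
// once published and is identified by its fully-qualified name.
interface Greeter_1_0 {
    String greet(String name);
}

// A later version adds functionality under a new name; Greeter_1_0 is
// never changed, so both versions can coexist.
interface Greeter_1_1 extends Greeter_1_0 {
    String greetFormally(String title, String name);
}

// One implementation can provide both versions of the contract.
class GreeterImpl implements Greeter_1_1 {
    public String greet(String name) {
        return "Hi " + name;
    }
    public String greetFormally(String title, String name) {
        return "Dear " + title + " " + name;
    }
}
```

Clients written against `Greeter_1_0` keep working unchanged while new clients consume `Greeter_1_1`, and a semantic change can be signaled by publishing a new versioned name even when the method signatures are identical.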

Class Loaders
In order to make hyper-dynamic principles work, we need to maintain a level of controlled isolation, provided by a hierarchy of class loaders.


In the diagram to the left, we see the hierarchy of class loaders that live in the Service Block. Each implementation is loaded into its own class loader, which is a child of the Service Loader, which is a child of the embedding application's class loader.

An important missing piece is the fact that the framework uses custom "greedy" class loaders. This guarantees that if the implementation is delivered with libraries and other supporting classes, they will be loaded in the implementation's class loader, completely isolated from other implementations. A caveat here is that you must not include your service definitions (interfaces) in the implementation's deployable jar files; service definitions belong in the Service Loader, where they are visible, and shared, across all implementations.
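A "greedy" loader inverts the JVM's default parent-first delegation: it tries its own jars before asking the parent. The sketch below shows the general technique using the standard `URLClassLoader`; it is illustrative of the idea, not Surinam's actual loader code:

```java
import java.net.URL;
import java.net.URLClassLoader;

// A child-first ("greedy") class loader: classes found in this loader's
// own jars shadow the parent's copies, isolating each implementation.
class GreedyClassLoader extends URLClassLoader {

    GreedyClassLoader(URL[] jars, ClassLoader parent) {
        super(jars, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    c = findClass(name);               // greedy: look locally first
                } catch (ClassNotFoundException e) {
                    c = super.loadClass(name, false);  // fall back to the parent chain
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

Classes the loader cannot find locally (including all `java.*` classes, which only the parent chain can define) still resolve through normal delegation.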

The following diagram explains how implementations loaded in different class loaders, in complete isolation, can still communicate with other services. The trick is to make sure that both the service implementation and the client that consumes that service share the same interface (Service Contract).

This arrangement is analogous to how a business needs to provide a way for customers to contact them (phone number, etc.). However, just providing the local number without an area code means that when a potential customer dials that same number from their location, they may not end up reaching the business. In our case, the service class loader holds the interface definition and is like an area code. Because the service loader is above the implementation loaders in the hierarchy (diagram to the right), we can make sure they can reach each other; putting service interfaces in a common shared loader guarantees that they will be able to talk to each other.


And at the risk of pushing the business analogy too far, service versioning is like having a law firm take on a partner. In this case, the original lawyer agrees to start a new business where they combine their respective expertise (offering new functionality), but wants all established "clients" to still be able to reach him at his original phone number. They hire an assistant to answer the phones and when someone dials the original number plus the extension 0, the greeting is, "Fubar Associates" and if they dial the number plus extension 1, they get "Fubar & Howe." In this analogy, the business is the service and phone extensions are versions of that business with different versions providing different sets of sub-services (method calls).

Service Graphs
Service Blocks take sets of services and manage them. One of the ways this happens is by 'weaving' sets of services into an interrelated web of services... a graph. The types of graphs you might build for your application can take a few different forms, but usually they are "directed", with some combination of connected, non-connected or hybrid types; a few of these are shown below to help visualize the concept.


In the diagram to the right, we see Service Block containment of a Service Graph. There are a few things we can notice from this depiction; out of the nine services, only two appear to have call chains that begin outside of the block, these are known as "Entry Points". On the other side of things, we see two nodes that have no dependencies on other services. Services that have no dependencies are referred to as "terminal services" as they terminate call chains. It is possible to have dependencies reach outside the block but that type of service is not shown here and will be covered elsewhere under "external resources".


It is pretty easy to see that what we have here is a big software construction set to play with, and in a different context, this same diagram could be used to describe an SOA; the similarity is not coincidental. The limits on building complex Service Graphs largely come down to how much complexity the developer can manage and what the machine the software runs on will support. This is a primary difference between Surinam's approach and a traditional SOA: while SOAs are designed to scale up well, they don't tend to scale down very well, whereas Surinam is designed to be more than just local but embedded inside; it is a much more... intimate model.

Service Directory
What SOA would be complete without a directory service? Every Service Block contains a Service Directory which controls the mapping of implementations to Service Contracts (interfaces); when a request comes in for a service, the service name is the fully-qualified name of the interface that is implemented and what is returned is a reference to that implementation object. Normally, direct use of the Service Directory is not necessary since the primary consumer of these services is the Service Proxy which is acquired via the Service Finder.

Service Threading - You may have noticed that service implementations are single instances; this is by design. Since the primary value for the framework is running inside a Java server environment, there is an implicit assumption that thread management will be handled by the server. The expectation is that an object's code paths will be run in parallel by threaded requests, which requires that the implementations be stateless. Since Surinam follows an SOA approach, any state could be passed along with requests (REST style); however, it's possible to take advantage of the fact that all invocations are local and that there is a well-defined hierarchy of class loaders. For times when stateful behavior is desired, you can either pass a state object along via the ThreadLocal mechanism or use a Flyweight [GoF] pattern to reference an execution context (which would probably live in the parent context); alternatively, you could deploy a Resource Broker Service as implemented in one of the included examples.
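The ThreadLocal option mentioned above can be sketched as follows; the `ExecutionContext` and `AuditService` names are hypothetical, chosen only to show a stateless service reading per-thread state it does not hold itself:

```java
// Per-request state carried on the thread, keeping the service stateless.
class ExecutionContext {
    private static final ThreadLocal<String> currentUser = new ThreadLocal<>();

    static void setUser(String user) { currentUser.set(user); }
    static String user()             { return currentUser.get(); }
    static void clear()              { currentUser.remove(); }   // avoid leaks in pooled threads
}

// A stateless service implementation: no fields, safe for parallel
// invocation, yet able to see the calling thread's context.
class AuditService {
    String record(String action) {
        return ExecutionContext.user() + " did " + action;
    }
}
```

The server (or an entry-point router) would populate the context at the start of a request and clear it at the end, so pooled threads never leak one request's state into the next.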

Service Finder
A Finder is tied to a specific Service Block, which is used in the construction of proxy objects. This is because a Finder is actually a factory of service proxies that are, in fact, not just proxies but "Intercepting Routers." These interceptors hold a reference to the block's Service Directory, which gets injected during construction. It is for this reason that all references requiring hyper-dynamic behavior must acquire services via the Finder; if you wanted a direct reference to a service that was guaranteed to never need dynamic updates, you could get one by talking to the Service Directory (which you can get from the Service Block). Currently, the routing strategy inside the interceptor is "Fully-Dynamic," which means that every time an invocation is made, an implementation is found to handle that call, not unlike how Stateless Session Beans work in JEE. You are never guaranteed that two sequential invocations are routed to the same instance, since the Service Graph may have been modified between calls; the second invocation could be routed to a new implementation.
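The "Fully-Dynamic" strategy can be reduced to a few lines: resolve the implementation from the directory on every call, keyed by the contract's fully-qualified name. This `MiniFinder` is a toy stand-in for the real Finder, written only to make the lookup-per-invocation behavior concrete:

```java
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy Finder: every invocation on the returned proxy re-resolves its
// target from the directory, so reshaping takes effect immediately.
class MiniFinder {
    final Map<String, Object> directory = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    <T> T find(Class<T> contract) {
        return (T) Proxy.newProxyInstance(
                contract.getClassLoader(),
                new Class<?>[] { contract },
                (proxy, method, args) -> {
                    Object impl = directory.get(contract.getName()); // per-call lookup
                    return method.invoke(impl, args);
                });
    }
}
```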


This diagram illustrates the relationships between the primary components of the framework. The Service Proxy is the object that your code will consume; since the proxy looks and acts like the actual POJO, your code does not need to know that it is using a framework. It is by design that the Finder simulates a typical Lookup or Service Locator pattern, where the caller can assume it is getting the service itself.

When client code gets a service object from a Finder, what it is actually getting is a dynamically-generated object that is custom-built by the Finder to implement the service interface, so that calling code can treat it as the real thing. This service proxy also embeds an interceptor, which is injected with a reference to the Service Directory.


In this diagram, the client code (your code) has used a Finder to acquire an object that implements the Service Contract (Foo_1_0). When the client makes an invocation on that object, the proxy intercepts the call with the Routing Interceptor, which gets the invocation target from the Service Directory, after which, the invocation is made on that target. This accounts for the possibility that at any given moment, that implementation could change through administrative action.

Blueprint Manager
Entry Points, Service Contracts and Provider Implementations are defined in formal structures: as annotations, XML and objects. These definitions amount to Service Blueprints that specify how a Service is to be built and deployed, plus additional information covering where to find the requisite resources (jar files). When this metadata is used to construct and deploy a service, it is not thrown away. Instead, it is handed off to the Service Block's "librarian," the BlueprintManager, which maintains a detailed view of the state of the Service Directory, and the graph it contains, that far exceeds simply tracking service mappings. In addition to maintaining a synchronized view of the directory, the BlueprintManager can coalesce all of that information back into a formal, schema-compliant Action Document that can be persisted and reapplied. The implications of this will be discussed in the detailed coverage of this component.

Action Documents

Programming administrative actions directly on Service Blocks to register Entry Points, Contracts and Implementations can take a bit of work and can get confusing. That level of control is also more fine-grained than most projects really need, so we needed a simpler way to perform these types of operations. The rationale: even if you choose to write code for everything you want to do, you still have to solve the administration problem so that Hyper-Dynamic behavior can be controlled externally and administrators can get their jobs done. Action Documents are schema-compliant XML documents that specify administrative actions to be "applied" to a Service Block. Applying an Action Document moves the block from its current Service Graph shape to a new shape or state. We draw a distinction between shape and state, since one operation might upgrade an existing Service Node in the graph, changing behavior but not the shape of the Service Graph itself.

There is a philosophical difference between an Action Document and most XML metadata approaches: there is no concept of initializing a Service Block with metadata. With JEE, for example, the metadata specifies the organization of the application, which is static; the organization is part of the application's architecture and design.

Surinam's approach is to separate metadata from the Service Block; we allow for the temporal nature of software and formally recognize that even if your application does have the perfect organizational scheme, it won't stay perfect for long (usually up to a week before you release it). The formal recognition of the impermanence of such schemes makes it impractical to do the software equivalent of getting the organizational structure "tattooed" across the application's virtual chest. Instead, a Service Block starts life as a blank slate with an empty Service Graph that provides the minimum allowable functionality... none; the software equivalent of an Etch-A-Sketch. Getting to this point is trivial and so is the learning curve - nothing more is needed. From there you apply Action Documents to change its state and reshape the Service Graph into something more useful, registering Service Contracts and assigning implementations. You even have a choice to do this monolithically with one Action Document or step-by-step with cumulative applications of documents that build your functionality over time.

Class Paths
There are two important notes to remember about class paths. First, they only point to jar files. Second, in order to make these paths as portable as possible, any classpath provided in an Action Document is prefixed with the contents of the environment variable SURINAM_DEPLOYMENT_ROOT. This allows a single document to remain viable on multiple machines whose deployment roots are configured differently, as long as the document paths are relative to the root. You also have the option of hard-coding all or part of the path in the document and not setting the environment variable.

Implicit in this benefit is the restriction that, for maximum portability, all deployed jars must reside under a common root. This should not prove much of a limitation, since you can create subdirectories for different implementation providers or any other organizational scheme that makes sense to you.
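The resolution rule just described can be sketched in plain Java. This is illustrative logic, not the framework's actual code; the absolute-path bypass reflects the hard-coding option mentioned above and is an assumption:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathResolution {

    /**
     * Resolve a jar path from an Action Document against the deployment root,
     * mirroring the SURINAM_DEPLOYMENT_ROOT prefixing rule described above.
     * Assumed behavior: a hard-coded absolute path is used as-is.
     */
    static Path resolve(String deploymentRoot, String documentPath) {
        Path path = Paths.get(documentPath);
        if (path.isAbsolute() || deploymentRoot == null) {
            return path;
        }
        return Paths.get(deploymentRoot).resolve(path);
    }

    public static void main(String[] args) {
        // The same document remains viable on two machines whose roots differ.
        System.out.println(resolve("/opt/deploy", "providers/acme/impl.jar"));
        System.out.println(resolve("/srv/surinam", "providers/acme/impl.jar"));
    }
}
```
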

Action Documents are about service registration and modification; this is where you specify classes, class paths to jar files, and mappings of implementations to Service Points within the Service Directory. Any given Action Document must be schema compliant and the actions are required to occur in the following order:

<action> & <service-block>
At the top of the document is an enclosing "action" tag, which encloses a "service-block" tag; currently, there can be only one service-block in an Action Document. The "reset" attribute specifies whether this action should be cumulative or start with a blank slate; that is, whether the actions should be applied to modify the current state or whether the previous state should be wiped clean before applying them. As you can imagine, this is an important thing to get right: if "reset" is true, the actions in this document will be the only state in the Service Block when the application is done.

<action xmlns:xsi="">
   <service-block reset="true">

Early Retirement
If we have a way to bring new Services and their Provider Implementations into the world, it makes sense to complete the cycle by making things go away as well; Service Contracts can now be 'retired' and Provider Implementations can be 'unbound.' These two activities are syntactically identical in that the Action Document format is the same in both cases, but what they do is somewhat different. Retiring a Contract makes it no longer known to the Service Block, while "unbinding" an implementation leaves the Contract in place, effectively undoing the binding action that brought them together.

A retirement action will take the fully-qualified Contract name and make it go away. By "go away," we mean that if the Service Contract has an implementation bound to it, that implementation will be unbound and released, then the Contract itself will be stricken from the Service Graph and released. Furthermore, the Service Block will have no knowledge of the Contract or the Implementation meta-data as the Blueprint Manager is also flushed.

<retire class="com.codemonster.surinam.web.contracts.EntryPointContract_1_0"/>

This action will take the fully-qualified Contract name and release the bound implementation if there is one (not including the default).

<unbind class="com.codemonster.surinam.web.contracts.EntryPointContract_1_0"/>

Entry Points
There has to be a way to make programmatic calls into the Service Block from code outside the block. By definition, this means that an Entry Point must be an interface (Software Contract) that is visible to the calling code; it can be a pre-existing interface or one that you have defined for this purpose. It will also be visible to the Service Block, since the calling code's loader should be the parent of the block's Service Loader; note that Entry Points are loaded in a class loader above the Service Block and are therefore not dynamic.

The good news about Entry Point interfaces is that they tend to be more stable and change less often. A good example might be a Servlet containing a Service Block where an entry point is the "HttpServlet" interface; since you are inside a Servlet, it is both a well-known interface and a safe bet that the Servlet can recognize it. You could therefore register that interface as an Entry Point inside the Service Block and write an implementation for it. What you end up with is essentially an entry point that is a Servlet of sorts as well; an important difference is that it would be dynamically manageable and upgradable. Additionally, this approach means that your Servlet code can simply pass along any requests it receives directly to the block, effectively allowing you to implement almost 100% of your application logic inside the block. Of course, if you wanted to leverage Hyper-Dynamic behavior, you would have additional coding to do, accepting commands and building out the level of management you want for your application.
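The delegation idea above can be sketched in plain Java. Nothing here is the framework's real API: RequestHandler stands in for an Entry Point interface, and ServiceBlock is a minimal hypothetical stand-in, just to show that the caller depends only on the interface while the implementation behind it can be swapped at runtime:

```java
// Stand-in for an Entry Point: an interface visible to both the calling
// code and the Service Block (hypothetical, for illustration only).
interface RequestHandler {
    String handle(String request);
}

// Minimal stand-in for a Service Block that can rebind its implementation
// at runtime without the caller's code changing.
class ServiceBlock {
    private RequestHandler bound = req -> "v1: " + req;

    RequestHandler entryPoint() {
        return bound;
    }

    void rebind(RequestHandler replacement) {
        bound = replacement;  // a dynamic upgrade, from the caller's view
    }
}

public class EntryPointDemo {
    public static void main(String[] args) {
        ServiceBlock block = new ServiceBlock();
        System.out.println(block.entryPoint().handle("GET /orders"));

        // Upgrade the implementation; callers that re-fetch the entry point
        // see the new behavior with no change to their own code.
        block.rebind(req -> "v2: " + req);
        System.out.println(block.entryPoint().handle("GET /orders"));
    }
}
```
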

An Entry Point is an interface that is visible both inside and outside a Service Block. This is required because it must be an interface that is loaded in the parent's class loader hierarchy, above the Service Block. It is for this reason that the example below does not have a classpath definition.

<entry-point class="com.codemonster.surinam.web.contracts.EntryPointContract_1_0"/>

Service Contracts
"Contract" is short for Service Contract or Software Contract, which refers to a formalized specification of a service interface internal to the Service Block. The Service Contract lives entirely inside the Service Block and will have a service implementation mapped to it; if a contract does not have an explicit mapping, a placeholder is assigned to keep the model consistent.

When a Contract is no longer used, it is no longer held by the framework and is freed for garbage collection, which will happen over time.

Contracts differ from Entry Points in one important respect: a Contract has a class path definition, potentially made up of one or more "path-segments," each of which contains a path pointing to a jar file with the class definition. When a Contract deploys, it is loaded into the Service Loader inside the Service Block, whereas an Entry Point is loaded in a parent loader.

<contract class="com.codemonster.surinam.example2.contracts.Service_003_Version_1_0">
    <!-- one or more path-segment elements locating the contract's jar file(s) -->
</contract>

Provider Implementations
An implementation can be written by anyone and distributed as a jar file. It might be written by the person or organization developing the application, but this is not necessary; it might have been downloaded from some Open Source repository of implementations. In such a case, you can assemble your application a bit like going to the store to buy a few nuts and bolts; hopefully, they are just helping you implement what we all hope is a great idea manifested as a great design.

When an implementation is assigned to only one Service Contract and is subsequently replaced by another implementation, the class loader holding that implementation (used to provide isolation from other services) is released. When that happens, it takes with it all supporting classes that were in that same loader.

An implementation requires a primary class that implements one or more Service Contracts, to which the implementation will be "bound." Service binding is also dynamic, and you can bind an implementation to multiple Service Contracts; if you want a really coarse-grained Service Graph, you can still opt to program a monolithic implementation. All supporting classes and libraries need to be included on the class path; any global libraries intended to be shared across services must be made available via a class loader outside the block or by implementing a Resource Broker in the Service Block.

<implementation class="">
    <!-- path-segment elements for the implementation's jar file(s) -->
</implementation>
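The class attribute in the manual's fragment is left empty. As a hedged sketch (the class name and jar path are illustrative assumptions, and path-segment is the element described under Class Paths), a filled-in registration might look like this:

```xml
<implementation class="com.example.providers.OrderServiceImpl">
    <path-segment>providers/example/order-impl.jar</path-segment>
</implementation>
```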

Example Action Document
An example of a complete document is shown below. It defines a contract and two implementations; since the service-block element does not specify the reset attribute, this document is meant to have a cumulative effect, rather than blowing away the existing Service Graph.

<?xml version="1.0"?>
<action xmlns:xsi="">
    <service-block>
        <contract class="com.codemonster.surinam.example2.contracts.Service_003_Version_1_0">
            <!-- path-segment elements locating the contract's jar file(s) -->
        </contract>
        <implementation class="">
            <!-- path-segment elements for the first implementation -->
        </implementation>
        <implementation class="">
            <!-- path-segment elements for the second implementation -->
        </implementation>
    </service-block>
</action>



Service Block Commander

There are three choices with regard to controlling a Service Block. First, controlling it programmatically means more detailed control with a simpler framework object model. The second choice is to wrap your Service Block in a Commander object; this adds complexity to the framework model but can greatly simplify working with the framework, since it swaps a lot of coding for a little Action Document metadata, and it gives the developer some interesting additional options for free. Since the second choice is just a wrapper, there is nothing you can do with the wrapper that you cannot do programmatically; the final choice is therefore a hybrid of the two: use the "SB" Commander for general things and write code when you need to do special things.

Action Documents in XML form are easily persisted and using the Commander means that you can now converse with the Service Block via Action Documents.

Another aspect of Action Documents is that they are easily transmitted over a network. This means an Action Document can be broadcast to every corner of the globe; what comes of that is a matter of creative thinking. Note that it is also possible to transmit jar files and classes, so it becomes possible to send everything necessary to deploy new applications and services remotely. This could have interesting applications for managing software deployed to grid systems that all have access to a common repository of available jars, as you would find in a Maven repository.

Tempus Fugit
The ability to persist administrative actions allows us to build systems that support time-shifting. Leveraging the Commander/Action Document management model makes it easy to integrate the technology with schedulers that can be used to drive these administrative actions in unattended modes at the most appropriate times.
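That scheduler integration can be sketched in plain Java. Commander here is a hypothetical stand-in, not the framework's real Commander API; the time-shifting itself is done by the standard ScheduledExecutorService:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ScheduledApply {

    // Hypothetical stand-in for the Service Block Commander; the real
    // framework class and method names may differ.
    interface Commander {
        void apply(String actionDocumentXml);
    }

    /** Schedule an Action Document to be applied after the given delay. */
    static void applyLater(Commander commander, String doc, long delayMillis)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // In production the delay would be computed to land in a maintenance
        // window (e.g. midnight); a short delay is used here for illustration.
        scheduler.schedule(() -> commander.apply(doc), delayMillis, TimeUnit.MILLISECONDS);
        scheduler.shutdown();  // delayed tasks still run after shutdown()
        scheduler.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean applied = new AtomicBoolean(false);
        applyLater(xml -> applied.set(true), "<action>...</action>", 50);
        System.out.println("document applied: " + applied.get());
    }
}
```
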

All Together Now
We do rolling upgrades of enterprise systems for the simple reason that all other viable methods are usually worse. The downside is that, for a period of time, the machines in service are running different versions of the software. One interesting facet of the Surinam framework is that by combining a scheduler with Action Documents that reshape the Service Graph, it becomes possible to perform scheduled upgrades across any number of machines simultaneously, minimizing the version-mismatch effect. A scenario might look like this:

An administrator starts with a well-tested and approved Action Document that represents the changes required to bring the currently-running application to its new runtime configuration (perhaps the introduction of a new service or an upgrade to an existing one). Using an admin tool, the administrator broadcasts the document to every node in a cluster, telling each onboard scheduler to apply the document at midnight. This sets up the upgrade so that at midnight, every machine reconfigures its Service Graph in parallel, without any lapse in service and without a prolonged period where a percentage of machines are running different versions of the software.

Completing the Cycle
Once upgrades have been applied and everything is running as expected, we can take advantage of the Blueprint Manager's ability to generate Action Documents that encapsulate the current organizational structure. The "BP" Manager will take any number of incremental Action Docs that have been applied and effectively "flatten" those layers into one comprehensive document again. This document can be saved so that if someone kicks the plug out of the wall, a restart can reapply that document and the Service Block will be as it was before.

What is a Software Application?
If you define an SOA application as a federation of services, then what happens when you upgrade some of the services that make up the application? Is it the same application? What if you swap out almost all of your code with upgrades that dramatically alter the services it uses and replace other services; is it still the same application? What if you replace all the services and the software now does something completely different? Somewhere along that path, you will decide that it is not the same application. This means that, aside from the availability of class libraries, the soul of the software can be represented by your Action Document; in other words, the Action Document becomes the application.

Application Du Jour, Circa 2025 (a.k.a. today)
Sometimes it happens that a thing can be used in such a way that it is barely recognizable as being related to the original (fluorescent cheese on a puffy snack comes to mind). These days, virtualization is all the rage; imagine having a number of virtual machines running your favorite Java server, which has deployed a Java application of your own design. Now imagine that you have cleverly designed the application to function as a Hyper-Dynamic Surinam management shell that uses a Service Block Commander wrapping a Service Block, making the whole thing manageable via Action Documents, with all the business logic deployed as services. Forget about using Action Documents for simple upgrades; now, from a master console, you could choose a machine and apply an Action Document that represents an entire application, drawn from a catalog of applications kept in a persistent store... et voilà. What you have is the software equivalent of Sea-Monkeys or microwave popcorn: just add some virtual computing power and watch as your server farm changes applications on the fly (true on-demand applications).


Copyright (2008) Samuel Provencher, All rights reserved