Monday, October 18, 2010

mShop - Screenshots

It's been a while (again) since I provided you with information about the current state of affairs concerning the mShop implementation. Today I would like to present some screenshots and detailed information about how I implemented the features I presented in my last post (e.g. object class hierarchy definition and the like).

Creating attribute types

Object class definitions within the application consist of a unique name and a set of attributes. Each attribute has its own value type such as STRING, INTEGER or TIMESTAMP. Assigning these atomic value types directly would remove the possibility of handling each attribute differently. Take an object class person with attributes firstname and lastname, both of type STRING, for example. When a user approves the order of an instance of type person, the application could not treat the values of firstname and lastname in different ways; it would treat them like any other STRING in the context. The reason is that attribute names are just free text values with no impact on the processing.

In order to remove that limitation, mShop introduces its own attribute types, where each type is assigned a concrete value type like STRING or INTEGER. Take the example above: we would create an attribute type FIRSTNAME of value type STRING and maybe a type BIRTHDAY of value type TIMESTAMP. This gives the user the option to adjust the handling of approved instances according to the attribute types, e.g. combining the values of types FIRSTNAME and LASTNAME into a value givenName that is provided to an LDAP server.
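To make this a bit more tangible, here is a minimal sketch in Java of how such attribute types could be modeled; the class and field names are purely illustrative and not taken from the actual mShop code base.

// Minimal sketch of user-defined attribute types backed by atomic value types;
// class and field names are illustrative, not taken from the mShop code base.
public class AttributeType {

    public enum ValueType { STRING, INTEGER, DECIMAL, TIMESTAMP, BOOLEAN }

    private final String name;         // e.g. "FIRSTNAME" or "BIRTHDAY"
    private final ValueType valueType; // e.g. STRING or TIMESTAMP

    public AttributeType(String name, ValueType valueType) {
        this.name = name;
        this.valueType = valueType;
    }

    public String getName() { return name; }
    public ValueType getValueType() { return valueType; }
}

An attribute type would then be created as, for example, new AttributeType("FIRSTNAME", AttributeType.ValueType.STRING).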




Creating object class orders

The driving force behind mShop is user-defined object classes, which serve as blueprints for concrete object instances (e.g. the class com.mnxfst.blog.Person is the defining type for the instance Christian Kreutzfeldt). Therefore you need to define classes before your users are able to order instances of those types.

Creating an object class requires you to provide some more detailed information and must be carried out with a certain amount of care, since it forms the foundation of the system once it is running.

The definition dialog has three different tabs. The first tab lets you provide common information about the class, such as its name, whether it is an abstract class, the activation date and the set of attributes.





The second tab lets you specify detailed information on how certain object class instances will behave in the approval workflow. You can define for each configuration (attribute value setting) the required audit level (how many people must approve the order) and what kind of operations are allowed for specific value settings.

The attached screenshot shows a configuration where object instances of the ordered type that reference the city of Hamburg and the specific street Kehrwieder need to be approved by two people for the operations create, update and delete.

These settings alone do not configure the concrete workflow path afterwards but represent the building blocks for it. In case a user orders an instance of type Person, the application checks the attribute value settings and matches them against the associated workflow configuration in order to look up the required audit level.
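A rough sketch of that lookup could look like the following; AttributeValue and WorkflowConfiguration are hypothetical helper types, not the actual mShop classes.

import java.util.List;

// Sketch of the audit-level lookup described above.
interface AttributeValue { }

interface WorkflowConfiguration {
    // true if every attribute value setting of this configuration
    // (e.g. city = "Hamburg", street = "Kehrwieder") is present in the order
    boolean matches(List<AttributeValue> orderedValues);
    int getAuditLevel(); // e.g. 2 => two people must approve the order
}

class AuditLevelLookup {
    // returns the audit level of the first matching configuration or a default
    int requiredAuditLevel(List<AttributeValue> orderedValues,
                           List<WorkflowConfiguration> configurations,
                           int defaultAuditLevel) {
        for (WorkflowConfiguration config : configurations) {
            if (config.matches(orderedValues)) {
                return config.getAuditLevel();
            }
        }
        return defaultAuditLevel;
    }
}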



The third tab finally lets you define approvers for ordered object class instances. Compared to the second tab, this one behaves nearly the same but adds another field: the priority level. The audit level defines on which approval level the named user is provided with a workflow item for a specific order. The priority level gives you the opportunity to define proxy rules in case a named approver does not answer within a defined timespan.



Approving object class order

After an object class has been fully specified, it needs to be approved by at least one object class approver. The object class approver opens the dialog displayed below and chooses to view all open/unanswered object class orders. In order to have the chance to validate the object class configuration, he is allowed to browse through all the settings the originator made. The handling and behavior is quite straightforward and needs only little description.



Order object class instance

Once an object class has been approved, it is available to all users who are allowed to order instances of that type. The screenshot below shows a set of classes the user can choose from.



The next step shows a set of available parent classes the user can choose from to define a hierarchy.



The third step lets the user specify values for the assigned attributes. The last step finally sends the object class instance order into the workflow process.



Approve order object class instance

The final view I would like to present today is the dialog for approving object class instance orders. Each approver is presented with a list of all orders that he is allowed to approve or reject. By selecting a single workflow item he is able to see all required order details, such as common information (audit level, priority level, object class name and the like) and the attribute value settings.





Admittedly, most of the dialogs do not look very fancy or colorful, but I will try to improve on that before the first release of the application.

Wednesday, June 30, 2010

mShop - The object classes

It's been a while since I wrote my last post about the mShop application. Today I would like to present the data structure which is visible to the application user and offers him the ability to model his problem domain as well as possible.

As I have stated in my last article, most applications leave you with a rather inflexible data structure which requires you either to adapt your problem domain to the software system or to modify the software system in its core parts to support your problem domain. Neither way is an acceptable solution to your problem.

When I started to design the mShop application, I had in mind that establishing a kind of centralized order process within a company must respect the fact that users are not willing to use one service for ordering objects of type A (like user accounts) and another service for ordering objects of type B (like computers or other office supplies). Besides that, the whole workflow-based approval process needs to be independent from the concrete object types handled by the system.

Therefore I tried to define a very generic data structure and a highly flexible approval engine. Both elements will be presented in more detail in the following paragraphs.

Data structure


The data structure basically follows the object-oriented design techniques used for software component specification. The following types are provided:


  • attributes

  • attribute types (like integer, date ...)

  • classes

  • objects (as class instances)



A class is composed of an arbitrary number of attributes which have a specific type which could be a


  • INTEGER

  • DECIMAL

  • STRING

  • TIMESTAMP

  • BOOLEAN

  • BLOB

  • OBJECT_REFERENCE

  • CUSTOM_TABLE_REFERENCE



Most of these types are self-explanatory except OBJECT_REFERENCE and CUSTOM_TABLE_REFERENCE. An attribute of type OBJECT_REFERENCE does not hold a plain value but a reference to another object (e.g. a user references another user to model the supervisor relationship). This gives the user maximum flexibility to model complex relationships.

The other special type (CUSTOM_TABLE_REFERENCE) has been implemented but is not in use. The idea behind this attribute type is the ability to reference objects or values that are completely unknown to the application and do not need to be modeled within the domain. At the moment this is just an extension point that might be used in the future.

When an attribute is assigned to a class, the user needs to specify some properties of this relationship:


  • NAME - name of the attribute (eg. firstname, lastname, email, ...)

  • DESCRIPTION

  • MIN_TIMES - defines how many values must be assigned to this attribute at minimum (used for list modeling)

  • MAX_TIMES - defines how many values may be assigned to this attribute at maximum (used for list modeling)

  • REQUIRED - a value for this attribute must be provided

  • VISIBILITY - defines the visibility of this attribute (public, protected, private - read more down below)

  • ATTRIBUTE_TYPE - defines the attribute type (eg. INTEGER, DECIMAL, STRING, ...)



The VISIBILITY of an attribute defines the scope within which the attribute is visible to any accessor. If the attribute is marked PRIVATE, only the class or instance itself is allowed to read and write its values. If the attribute is marked PROTECTED, the class or instance as well as all ancestors of the defining class and their instances are allowed to read and write its values. If the attribute is marked PUBLIC, all classes and instances are allowed to read and write its values.

Although there are no limitations concerning the depth of a class hierarchy and the number of attributes within it, the user must be aware that each new level has an impact on the overall performance of the hierarchy as a whole.

After the classes have been defined, the application allows instances (objects) to be created from these blueprints. To do so, the application reads the class definition (including all ancestors) and provides the user with a dialog that requires him to provide values for all attributes visible to him.
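For illustration only, the class definitions and their attribute assignments could be captured roughly like this; names and structure are my assumptions, not the real mShop data model.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a class definition carries a set of attribute
// assignments and an optional parent class forming the hierarchy.
class AttributeAssignment {
    enum Visibility { PUBLIC, PROTECTED, PRIVATE }

    String name;           // e.g. "firstname"
    String description;
    int minTimes;          // minimum number of values (list modeling)
    int maxTimes;          // maximum number of values (list modeling)
    boolean required;
    Visibility visibility;
    String attributeType;  // references an attribute type, e.g. "FIRSTNAME"
}

class ObjectClassDefinition {
    String name;                   // e.g. "com.mnxfst.blog.Person"
    ObjectClassDefinition parent;  // null if the class has no ancestor
    List<AttributeAssignment> attributes = new ArrayList<AttributeAssignment>();

    // collects the attribute assignments of this class and all ancestors,
    // i.e. everything the order dialog has to ask values for
    List<AttributeAssignment> allAttributes() {
        List<AttributeAssignment> result = new ArrayList<AttributeAssignment>();
        for (ObjectClassDefinition current = this; current != null; current = current.parent) {
            result.addAll(current.attributes);
        }
        return result;
    }
}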

Approval engine

The job of the approval engine is to analyze incoming object orders using an XPath-like expression language and forward them to a suitable set of approvers (which might be a single one as well). If the required number of users has approved the order, it will either be forwarded for final processing (like creating user accounts or ordering printers) or, if required, forwarded to another set of approvers (two-man rule). This depends on the approval information provided for the underlying class.

Each class can have one or a whole set of approval information entities, where each element defines how entities that match the given path expressions must be handled:


  • OBJECT_CLASS - defines the class that this element provides approval information for

  • AUDIT_LEVEL - defines the audit level (1..n, two-man rule)

  • DISABLED - the element will not be used by the approval engine

  • ORDER_CREATE_ALLOWED - if an object applies to the given path expressions, it might be created

  • ORDER_UPDATE_ALLOWED - if an object applies to the given path expressions, it might be updated

  • ORDER_DELETE_ALLOWED - if an object applies to the given path expressions, it might be deleted

  • PATH_EXPRESSIONS - set of strings holding path expressions. If an object evaluates positively against all of these expressions, the options mentioned above will be applied



The approval engine reads the type of an incoming object order and fetches all approval information entities for that type. The path expressions of each entity are evaluated against the object. If all expressions of an entity match, the rules defined by that approval information entity tell the engine how to further process the object order (e.g. which audit level it requires and whether an object may be created at all).

Now the engine finally needs to identify all possible approvers for the ordered object. For this, a quite similar information entity is used. It differs from the one above only by adding the following attribute:


  • PRIORITY_LEVEL


The PRIORITY_LEVEL is used by the approval engine in case the current set of approvers did not respond within a given timespan. The engine then needs to forward the order element to a second/third/fourth ... round of approvers, where the priority level identifies the specific round in which an approver enters the ring.

Before the engine forwards an order to an approver, it uses the path expression set to evaluate whether the ordered object can be handled by that specific approver. If all path expressions match, the engine checks whether the operation (create, update, delete) may be carried out by the approver at all.
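The escalation by priority level could be sketched like this, again with assumed types:

import java.util.ArrayList;
import java.util.List;

// Sketch of the priority-level escalation: approvers of round 1 are addressed
// first; if they do not respond in time, the order is forwarded to the
// approvers of the next round. ApproverInfo is a hypothetical type.
interface ApproverInfo {
    int getPriorityLevel(); // 1 = first round, 2 = second round, ...
    String getUserId();
}

class ApproverEscalation {
    // all approvers entering the ring in the given round
    List<ApproverInfo> approversForRound(List<ApproverInfo> allApprovers, int priorityLevel) {
        List<ApproverInfo> round = new ArrayList<ApproverInfo>();
        for (ApproverInfo approver : allApprovers) {
            if (approver.getPriorityLevel() == priorityLevel) {
                round.add(approver);
            }
        }
        return round;
    }
}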

Conclusion

Although I have only provided you with a small glimpse, I guess you got a good idea of how flexible the application's data structure is. In the upcoming articles I will talk a bit about the license model of mShop and provide you with some screenshots and more information about key features.

Wednesday, April 21, 2010

mShop - an introduction

Today I would like to start a little series of articles about a personal software project called mShop. The mShop application is a response to a discussion I had with some friends a while ago about the handling of internal orders for office supplies and other items you need to carry out your daily work, e.g. printer paper, pencils, desktop computers, user accounts or permissions. The key question was whether there are any professional tools that comply with the following requirements:


  • define products having arbitrary attributes at runtime

  • flexible delegation of responsibilities concerning the approval of product orders

  • automatic delegation of approved order items into sub-systems for further processing, e.g. a Unix server for user account creation.

  • being cheap within boundaries



We came to the conclusion that there are indeed sophisticated tools like ActiveEntry, or one might take an existing open source shopping platform and modify it according to one's needs. BUT, these solutions are either expensive, limited in their feature set or require the owner to adopt their inflexible data structures.

That finally led to my decision to specify and implement an application that provides the features mentioned above. Following my nick mnxfst, I took its first character (m) and added the suffix shop.

To keep the articles short, I split them up into a series where each entry concentrates on a different key feature of mShop. The next article will discuss the definition of objects users might order via the web shop.

Sunday, March 21, 2010

FetchType.EAGER - Cannot simultaneously fetch multiple bags

I just came across a quite interesting problem that might give you a hint on how hibernate internally works. Assume you have the following object association:


  • A references B

  • A has a one-to-many relationship with C and the fetch type is set to FetchType.EAGER

  • B has a one-to-many relationship with D and the fetch type is set to FetchType.EAGER



If you try to initialize hibernate using such a layout, you will receive an error message saying something like cannot simultaneously fetch multiple bags.
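The layout above corresponds roughly to the following simplified, hypothetical entity mappings:

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Simplified, hypothetical entities reproducing the layout described above.
@Entity
class A {
    @Id Long id;

    @ManyToOne        // no fetch type given, defaults to EAGER
    B b;

    @OneToMany(fetch = FetchType.EAGER)
    List<C> cs;       // first eagerly fetched bag
}

@Entity
class B {
    @Id Long id;

    @OneToMany(fetch = FetchType.EAGER)
    List<D> ds;       // second eagerly fetched bag, pulled in via A -> B
}

@Entity
class C {
    @Id Long id;
}

@Entity
class D {
    @Id Long id;
}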

Since my mappings had been okay until I added a test case for B, and the relationships of B looked correct at first sight, I started to search for a possible reason. I came across different solutions, among them this one from Eyal Lupu. It did not directly solve my problem, but it let me analyze the code from a different point of view.

Eyal writes that it is not allowed to have two mapped lists in one entity marked with FetchType.EAGER because the generated SQL statements would lead to an incorrect loading of the desired entity. If you look at my example above, there is just one list per entity to be loaded eagerly, so why does it fail? The answer lies in the first statement of my example: A references B.

If you refer from one entity to another without an explicit fetch type definition, hibernate will apply FetchType.EAGER. That again leads to a join operation between the relations that contain A and B. Since the relationships A - C and B - D are also marked to be loaded eagerly, they will be incorporated into the SQL statement creation as well. Guess what operation will be used to compute the entities of those relationships: JOIN. And that leads us back to Eyal's posting.

Eyal suggests adding an @IndexColumn annotation, which replaces the bag semantics with list semantics and gives hibernate a hint on how to handle elements of that type. Another solution - and that's the one I chose - is to mark the connection between A and B as being fetched lazily. In case you need the relationship between A and B to be resolved on entity loading, a third option is to mark one or both of the one-to-many relationships as being fetched lazily.
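Applied to the hypothetical mapping above, the variant I chose looks roughly like this:

// Continuing the hypothetical mapping above: the reference from A to B is now
// loaded lazily, so at most one bag (A -> C) is fetched eagerly per query.
@Entity
class A {
    @Id Long id;

    @ManyToOne(fetch = FetchType.LAZY)   // explicitly lazy instead of the EAGER default
    B b;

    @OneToMany(fetch = FetchType.EAGER)
    List<C> cs;
}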

Apart from the FetchType.EAGER problem, it is a good idea to have some of the relationships mentioned in the example above loaded lazily anyway. Loading all associated objects eagerly, with a relationship that for example represents a chain of ancestors, would leave you with a large set of loaded objects although you need only one.

Monday, March 8, 2010

Object change tracking

Assume you have the task of writing audit-proof software that can easily be reset to a previous, fully operable state for all objects managed by the application. I am going to describe how I solved this issue for my assignment.

One reason for resetting a software system to a previous state is, for example, the ability to trace back changes made at a specific time by a known or unknown user. Usually features like this are implemented through a logging component with a more or less high resolution. Unfortunately, the information captured gives only a very limited view of the context the changes were made in. Writing all required information into a single log entry would be a quick but brute-force solution and would make the log unreadable to auditors.

A second aspect - and that's the driving force besides the traceability of changes in my project - is the possibility of automatically replaying all changes made to objects between two defined timestamps and provisioning these modifications into attached sub-systems. These tasks cannot be achieved through a simple logging mechanism, since next to the data directly involved in certain actions, transitively affected information within the context of these objects is required as well. Therefore a more complex and more complete solution must be found which is still manageable on all system levels - from the abstract UI level down to the database level.

A very common approach for recording changes within datasets is to save delta information representing the transition between two states instead of creating a copy of the original state and applying the changes made by the process. In my case, where I treat the whole object context as the dataset, the latter would lead to a redundant enlargement of the whole database.

To avoid redundant information and an explosion of managed data objects, I decided to follow the idea of saving
only delta information covering the named dataset.

Within my project I have to manage a lot of objects which are linked to each other, and where changing a linked object's attribute value could automatically lead to a semantic change of the originating object. Since the changed object could be linked by n other objects, a complete and stable approach to saving the state transition would be to store the changed object as well as all linking objects. This would not be as much data as writing a whole copy of the object's context, but it still produces unnecessary redundant information.

I decided to follow a much simpler way to solve this problem. Whenever an object changes, it and its ancestors can be identified at any time through the unique object id and a version stamp. All links between objects are designed such that they reference the unique object id as well as the version stamp.

If someone changes an object, a new version is created, while all existing links keep pointing to the version-controlled ancestor. Other objects that are linked with the changed one in the future only see the most current version. This leads to an overall consistent state.

But this concept is too basic to cover all requirements. Sometimes changes to object attribute values are of such minor significance that they have no influence on the link semantics at all. Therefore it must be possible to mark changes in such a way that no new version is created but the most current one is modified in place.

There is the opposite extreme as well. Sometimes changes made to objects have such an impact that they must be propagated to all linked objects, too. In this case all links to former versions are upgraded to the most current one.

Sunday, January 10, 2010

Portlet Development With Spring 3.0

Portlet Development With Spring 3.0

Some weeks ago I started planning a dedicated online community backed by a Liferay portal instance. Up to now I have spent most of my time defining use case scenarios, evaluating software tools and convincing people of my idea. Lately I began to write smaller portlets that provide basic features which are not included in Liferay and, along the way, give me a deeper insight into portlet development using Liferay.

I will write more about that platform in one of the upcoming posts but for now I would like to report about my odyssey collecting information on how to develop JSR 286 portlets supported by the Spring 3.0 framework in a Liferay environment.

If you want to develop portlets for any version of the Liferay portal, you might use the ext environment provided by the Liferay team. Although I appreciate the help in developing applications for their portal, I definitely don't want to learn a proprietary development environment but rather use my knowledge about standard portlet development and get on the road quickly.

I searched a while until I came across some useful documentation about how to integrate Spring 3 and standard portlets (JSR 168 and 286). The Spring framework provides you with a dispatcher portlet that maps incoming portlet requests to registered handlers.

After having assembled all required information, I started developing my first Liferay portlet. The following steps are the ones I carried out (abbreviated):

The first step creates a directory layout as follows:


/myPortlet
/myPortlet/src/
/myPortlet/src/main
/myPortlet/src/main/java
/myPortlet/src/main/java/sourceCodeGoesHere
/myPortlet/src/main/webapp/
/myPortlet/src/main/webapp/WEB-INF/
/myPortlet/src/main/webapp/WEB-INF/classes/


The next step is to add configuration files required by Liferay:


/myPortlet/src/main/webapp/WEB-INF/liferay-display.xml
/myPortlet/src/main/webapp/WEB-INF/liferay-portlet.xml
/myPortlet/src/main/webapp/WEB-INF/portlet.xml
/myPortlet/src/main/webapp/WEB-INF/web.xml


  • The liferay-display.xml configuration file defines the category the user can find the tool in when adding a new application to a portal page (see Google for more information).

  • If you want to control access to the portlet, you need to modify liferay-portlet.xml, which holds the names of the roles that are allowed to use the portlet (see Google for more information).

  • The file portlet.xml contains the core definition of your portlet: display name, class, init parameters and the like. Normally you would provide the name of your custom portlet class, but since we are using Spring, we must tell the portlet container to forward all incoming portlet requests to the Spring dispatcher portlet. Additionally we need to give the path of the Spring context configuration file:

    <portlet-app>
    .
    .
      <portlet-name>mySamplePortlet</portlet-name>
      <display-name>My Sample Portlet</display-name>
      <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
      <init-param>
        <name>contextConfigLocation</name>
        <value>/WEB-INF/classes/my-sample-portlet-context.xml</value>
      </init-param>
    .
    .
    </portlet-app>


  • At last we have the web.xml file. Internally, the Spring framework handles the whole custom portlet as a web application with the dispatcher portlet as entry gate. Thus the web.xml configures the custom portlet application like any other Spring web application: it needs a path to the Spring context definition file, names a set of listeners and defines how to handle incoming requests (-> ViewRendererServlet).

    <web-app ...>
    .
    .
      <context-param>
        <param-name>webAppRootKey</param-name>
        <param-value>mySamplePortletportlet</param-value>
      </context-param>
      <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/classes/my-sample-portlet-context.xml</param-value>
      </context-param>
      <listener>
        <listener-class>org.springframework.web.util.WebAppRootListener</listener-class>
      </listener>
      <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
      </listener>
      <servlet>
        <servlet-name>ViewRendererServlet</servlet-name>
        <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet-mapping>
        <servlet-name>ViewRendererServlet</servlet-name>
        <url-pattern>/WEB-INF/servlet/view</url-pattern>
      </servlet-mapping>
    .
    .
    </web-app>



Now I need to implement a controller for handling incoming requests. Since I am mapping incoming requests to a common Spring MVC application, the implementation of a request controller takes the same steps as for a standalone web application. See Google for extended information on how to implement Spring MVC controllers, define request, action or render mappings and how to identify incoming request parameters.

I omit the detailed source code here and continue with the definition of the Spring context. As defined in the web.xml and the portlet.xml, I create a new file in /WEB-INF/classes/ named my-sample-portlet-context.xml. In order to get the application running, the following beans must be defined at minimum:


<bean id="mySamplePortletViewController" class="com.me.MySamplePortletViewController">
<property name="debugMode" value="true"/>
</bean>

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="cache" value="false" />
<property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
<property name="prefix" value="/" />
<property name="suffix" value=".jsp" />
</bean>

<bean class="org.springframework.web.portlet.mvc.annotation.DefaultAnnotationHandlerMapping">
<property name="interceptors">
<bean class="org.springframework.web.portlet.handler.ParameterMappingInterceptor"/>
</property>
</bean>


The first bean names the controller which should receive incoming requests (as defined by the RequestMapping annotations in the source code). The second bean defines the type of view used for displaying contents to the user. In this case I use JSP backed by JSTL. The last bean is the one that handles the mappings defined in the source code of the controller implementation. The provided interceptor implementation makes sure that the incoming request parameters are mapped correctly to method parameters.

Finally you need to create and implement the file (JSP) referenced in the controller source code as the destination used for displaying information to the user. That's all. ... well, not quite everything ... there are some pitfalls I stumbled upon that I would like to write about so that you don't make the same mistakes I did.

RequestMappings
The JSR 168 standard only knew about generic request mappings, whereas JSR 286 portlets differentiate incoming requests by the phase they originate from and the purpose they serve:

  • Action Request (Annotation: ActionMapping)

  • Render Request (Annotation: RenderMapping)

  • Event Request (Annotation: EventMapping)

  • Resource Request (Annotation: ResourceMapping)


If you want to learn more about these request types, please check Google for further information. I strongly advise you to use these specialized annotations although Spring lets you use the generalized RequestMapping annotation. First of all it makes the source code more readable; second, you know exactly which method handles which request type - no confusion possible!

RenderMapping requires String result
As mentioned above, after processing an incoming request, the portlet controller sends the response to a defined view page. The name of that page is provided by the result of the method that handles render requests -> the method must return a string.
If it returns 'view', the user will be redirected to 'view.jsp'.
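For illustration, a minimal render-handling controller matching the mySamplePortletViewController bean defined earlier could look like the following sketch; the class body is an assumption, not the actual project code.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.portlet.bind.annotation.RenderMapping;

// Minimal sketch of a render-handling portlet controller.
@Controller
@RequestMapping("VIEW")  // portlet mode handled by this controller
public class MySamplePortletViewController {

    private boolean debugMode; // set via the debugMode property in the context file

    public void setDebugMode(boolean debugMode) {
        this.debugMode = debugMode;
    }

    // optional initialization hook, e.g. wired as init-method of the bean
    public void init() {
        // initialization logic that might depend on external data
    }

    @RenderMapping
    public String showView() {
        // the returned string names the view page: "view" resolves to /view.jsp
        return "view";
    }
}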

Unit testing
If you want to test your portlet controller, check out the test cases for the Spring framework itself. Since the usage of init methods is quite common, you should be aware that the init method of your controller is executed before the setUp method within a JUnit test case. So, if you plan to access data in your init method which is created by the setUp method, you need to explicitly call the init method in your test method.
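A short sketch of what that can look like (JUnit 4, reusing the controller sketch from above):

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

// Sketch only: if the controller's init method relies on data prepared in
// setUp(), call it explicitly inside the test method instead of relying on
// it having run at the right time.
public class MySamplePortletViewControllerTest {

    private MySamplePortletViewController controller;

    @Before
    public void setUp() {
        // prepare the test data the controller's init logic depends on
        controller = new MySamplePortletViewController();
    }

    @Test
    public void rendersDefaultView() {
        controller.init(); // explicitly re-run initialization after setUp()
        assertEquals("view", controller.showView());
    }
}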