

Service Builder 6.2 Migration

Liferay - Thu, 02/23/2017 - 22:50

I'm taking a short hiatus from the design pattern series to cover a topic I've heard a lot of questions on lately - migrating 6.2 Service Builder wars to Liferay 7 CE / Liferay DXP.

Basically it seems you have two choices:

  1. You can keep the Service Builder implementation in a portlet war. Any wars you keep going forward will have access to the service layer, but can you access the services from other OSGi components?
  2. You take the Service Builder code out into an OSGi module. With this path you'll be able to access the services from other OSGi modules, but will the services be available to the legacy portlet wars?

So it's that mixed usage that leads to the questions. I mean, if all you have is either legacy wars or pure OSGi modules, the decision is easy - stick with what you've got.

But when you are in mixed modes, how do you deliver your Service Builder code so both sides will be happy?

The Scenario

So we're going to work from the following starting point. We have a 6.2 Service Builder portlet war that follows a recommendation I frequently give: the war contains only the Service Builder implementation and nothing else, no other portlets. I often recommend this because it gives you a working Service Builder implementation with no pollution from Spring or other libraries that can sometimes conflict with Service Builder. We'll also have a separate portlet war that leverages the Service Builder service.

Nothing fancy for the code, the SB layer has a simple entity, Course, and the portlet war will be a legacy Liferay MVC portlet that lists the courses.

We're tasked with upgrading our code to Liferay 7 CE or Liferay DXP (pick your poison), and as part of the upgrade we will have a new OSGi portlet component using the new Liferay MVC framework for adding a course.

To reduce our development time, we will upgrade our course list portlet to be compatible with Liferay 7 CE / Liferay DXP but keep it as a portlet war - basically the minimal effort needed to get it upgraded. We'll also have the new portlet module for adding a course.

But our big development focus, and the focus of this blog, will be choosing the right path for upgrading that Service Builder portlet war.

For evaluation purposes we're going to have to upgrade the SDK to a Liferay Workspace. Doing so will help get us some working 7.x portlet wars initially, and then when it comes time to do the testing for the module it should be easy to migrate.

Upgrading to a Liferay Workspace

So the Liferay IDE version 3.1 Milestone 2 is available, and it has the Code Upgrade Assistant to help take our SDK project and migrate it to a Liferay Workspace.

For this project, I've made the original 6.2 SDK project available at

You can find an intro to the upgrade assistant in Greg Amerson's blog: and Andy Wu's blog:

It is still a milestone release and a work in progress, but it did work on upgrading my sample SDK. Just a note, though: it takes some processing time during the initial upgrade to a workspace. If you think it has locked up or is unresponsive, just have patience. It will come back and it will complete; you just have to give it time to do its job.


After you finish the upgrade, you should have a Liferay workspace with a plugins-sdk directory containing the normal SDK directory structure. In the portlets directory, the two portlet war projects are there, ready for deployment.

In fact, in the plugins-sdk/dist directory you should find both of the wars just waiting to be deployed. Deploy them to your new Liferay 7 CE or Liferay DXP environment, then drop the Course List portlet on a page and you should see the same result as the 6.2 version.

So what have we done so far? We upgraded our SDK to a Liferay Workspace and the Code Upgrade Assistant has upgraded our code to be ready for Liferay 7 CE / Liferay DXP. The two portlet wars were upgraded and built. When we deployed them to Liferay, the WAR -> WAB conversion process converted our old wars into OSGi bundles.

However, if you go into the Gogo shell and start digging around, you won't find the services defined by our Service Builder portlet. Obviously they are there, because the Course List portlet uses them to get the list of courses.

War-Based Service Builder

So how do these war-based Service Builder upgrades work? If you take a look at the CourseLocalServiceUtil's getService() method, you'll see that it uses the good ole' PortletBeanLocator and the registered Spring beans for the Service Builder implementation. The Util classes use the PortletBeanLocator to find the service implementations and may leverage the class loader proxies (CLP) if necessary to access the Spring beans from other contexts. From the service war perspective, it's going through Liferay's Spring bean registry to get access to the service implementations.
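
To make that concrete, the generated Util class's lookup looks roughly like the following. This is a simplified sketch of what Service Builder 6.2 generates for our Course example, not runnable on its own (the real code also handles the class loader proxies and invokable-service wrapping mentioned above):

```java
import com.liferay.portal.kernel.bean.PortletBeanLocatorUtil;

// Simplified sketch of the 6.2-generated CourseLocalServiceUtil lookup.
// The service war registers its Spring beans with Liferay under its
// servlet context name; the Util class looks them up from that registry.
public class CourseLocalServiceUtil {

	private static CourseLocalService _service;

	public static CourseLocalService getService() {
		if (_service == null) {
			// Locate the Spring bean registered by the school-portlet war.
			_service = (CourseLocalService)PortletBeanLocatorUtil.locate(
				"school-portlet", CourseLocalService.class.getName());
		}

		return _service;
	}
}
```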

Long story short, our service jar is still a service jar. It is not a proper OSGi module and cannot be deployed as one. But the question is, can we still use it?

OSGi Add Course Portlet

So we need an OSGi portlet to add courses. Again, this will be another simple portlet to show a form and process the submit. Creating the module is pretty straightforward; the challenge, of course, is including the service jar in the bundle.
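
As a sketch, the Add Course portlet component looks something like the following (the class name, display name, and category are hypothetical; the property set is trimmed to the essentials):

```java
import com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet;

import javax.portlet.Portlet;

import org.osgi.service.component.annotations.Component;

// Hypothetical Add Course portlet registered as an OSGi DS component.
@Component(
	immediate = true,
	property = {
		"",
		"javax.portlet.display-name=Add Course",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.resource-bundle=content.Language"
	},
	service = Portlet.class
)
public class AddCoursePortlet extends MVCPortlet {
}
```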

The first thing necessary is to include the jar in the build.gradle dependencies. Since it is not in a Maven-like repository, we'll need a slightly different syntax to include the jar:

dependencies {
	compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
	compileOnly group: "com.liferay.portal", name: "com.liferay.util.taglib", version: "2.0.0"
	compileOnly group: "javax.portlet", name: "portlet-api", version: "2.0"
	compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
	compileOnly group: "jstl", name: "jstl", version: "1.2"
	compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"

	compile files('../../plugins-sdk/portlets/school-portlet/docroot/WEB-INF/lib/school-portlet-service.jar')
}

The last line is the key; it is the syntax for including a local jar file, and in our case we're pointing at the service jar which is part of the plugins-sdk folder that we upgraded.

Additionally we need to add the stanza to the bnd.bnd file so the jar gets included into the bundle during the build:

Bundle-ClassPath:\
	.,\
	lib/school-portlet-service.jar

-includeresource:\
	lib/school-portlet-service.jar=school-portlet-service.jar

As you'll remember from my blog post on OSGi Module Dependencies, this is option #4 to include the jar into the bundle itself and use it in the classpath for the bundle.

Now if you build and deploy this module, you can place the portlet on a page and start adding courses.  It works!

By including the service jar into the module, we are leveraging the same PortletBeanLocator logic used in the Util class to get access to the service layer and invoke services via the static Util classes.

Now that we know that this is possible (we'll discuss whether to do it this way in the conclusion), let's now rework everything to move the Service Builder code into a set of standard OSGi modules.

Migrating Service Builder War to Bundle

Our service builder code has already been upgraded when we upgraded the SDK, so all we need to do here is create the modules and then move the code.

Creating the Clean Modules

First step is to create a clean project in our Liferay workspace, a foundation for the Service Builder modules to build from.

Once again I start with Blade, since I'm an IntelliJ developer. In the modules directory, we'll let Blade create our Service Builder projects:

blade create -t service-builder -p school

For the last argument, use something that reflects your current Service Builder project name.

This is the clean project, so let's start dirtying it up a bit.

Copy your legacy service.xml to the school/school-service directory.
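
If you don't have the legacy file handy, a minimal service.xml for our Course entity would look something like this (the package path and column names here are assumptions for illustration, not taken from the actual repo):

```xml
<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN"
	"">

<service-builder package-path="com.liferay.school">
	<namespace>School</namespace>
	<entity name="Course" local-service="true" remote-service="false" uuid="true">
		<!-- Primary key -->
		<column name="courseId" type="long" primary="true" />

		<!-- Hypothetical data columns -->
		<column name="name" type="String" />
		<column name="description" type="String" />
	</entity>
</service-builder>
```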

Build the initial Service Builder code from the service XML. If you're on the command line, you'd do:

../../gradlew buildService

Now we have unmodified, generated code. Layer in the changes from the legacy Service Builder portlet, including:

  • portlet-model-hints.xml
  • Changes to any of the META-INF/spring xml files
  • All of your Impl java classes

Rebuild services again to get the working module code.

Module-Based Service Builder

So we reviewed how the CourseLocalServiceUtil's getService() method in the war-based service jar leveraged the PortletBeanLocator to find the Spring bean registered with Liferay to get the implementation class.

In our OSGi module-based version, the CourseLocalServiceUtil's getService() method is instead using an OSGi ServiceTracker to get access to the DS components registered in OSGi for the implementation class.
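
Again as a rough sketch (simplified from what Service Builder actually generates, and not runnable outside an OSGi framework):

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;
import org.osgi.util.tracker.ServiceTracker;

// Simplified sketch of the module-generated CourseLocalServiceUtil.
// Instead of a Spring bean lookup, a ServiceTracker watches the OSGi
// service registry for the CourseLocalService implementation component.
public class CourseLocalServiceUtil {

	private static final ServiceTracker<CourseLocalService, CourseLocalService>
		_serviceTracker;

	static {
		Bundle bundle = FrameworkUtil.getBundle(CourseLocalServiceUtil.class);

		_serviceTracker = new ServiceTracker<>(
			bundle.getBundleContext(), CourseLocalService.class, null);

		_serviceTracker.open();
	}

	public static CourseLocalService getService() {
		return _serviceTracker.getService();
	}
}
```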

Again, the service "jar" is still a service jar (well, a module). We also know that the Add Course portlet will be able to leverage the service (with some modifications); the question, of course, is whether we can also use the service API module in our legacy course list portlet.

Fixing the Course List Portlet War

So what remains is modifying the course list portlet so it can leverage the API module in lieu of the legacy Service Builder portlet service jar.

This change is actually quite easy...

The file, changed by the upgrade assistant, contains the following:

required-deployment-contexts=\
	school-portlet

This is the property used by the Liferay IDE to inject the service jar so the service will be available to the portlet war. Since we're no longer using the deployment context, remove these two lines.

If you have the school-portlet-service.jar file in docroot/WEB-INF/lib, go ahead and delete that file since it is no longer necessary.

Next comes the messy part; we need to copy the API jar into the course list portlet's WEB-INF/lib directory. We have to do this so Eclipse can compile all of our code that uses the API. There's no easy way to do this, but I can think of the following options:

  1. Manually copy the API jar over.
  2. Modify the Gradle build scripts to support installing artifacts into the local Maven repo; then the project's Ivy configuration can be adjusted to include the dependency. Not as messy as a manual file copy, but it involves installing the API jar so Ivy can find it.

We're not done there... We actually cannot keep the jar in WEB-INF/lib, otherwise at runtime you'll get class cast exceptions, so we need to exclude it during deployment. This is easily handled, however, by adding an exclusion to your file:

module.framework.web.generator.excluded.paths=<CURRENT EXCLUSIONS>,\
	WEB-INF/lib/

When the WAR->WAB conversion is taking place, it will exclude this jar from being included. So you get to keep it in the project and let the WAB conversion strip it out during deployment.

Remember to keep all of the current excluded paths in the list; you can find them in the file included in your Liferay source.

Build and deploy your new war and it should access the OSGi-based service API module.


Conclusion

Well, this ended up being a mixed bag...

On one hand, I've shown that you can use the Service Builder portlet's service jar as a direct dependency in a module and that the module can invoke the service through the static Util classes defined within. The advantage of sticking with this path is that it really doesn't require much modification of your legacy code beyond completing the code upgrade, and the Liferay IDE's Code Upgrade Assistant gets you most of the way there. The obvious disadvantage is that every module that needs to invoke the service layer now embeds the service jar as a dependency; if you change the service layer, you're going to have to rebuild and redeploy every module that embeds it.

On the other hand I've shown that the migrated OSGi Service Builder modules can be used to eliminate all of the service jar replication and redeployment pain, but the hoops you have to jump through for the legacy portlet access to the services are a development-time pain.

It seems clear, at least to me, that the second option is the best. Sure, you will incur some development-time pain copying service API jars just to keep the Java compiler happy, but it definitely has the least impact when the service API changes.

So my recommendations for migrating your 6.2 Service Builder implementations to Liferay 7 CE / Liferay DXP are:

  • Use the Liferay IDE's Code Upgrade Assistant to help migrate your code to be 7-compatible.
  • Move the Service Builder code to OSGi modules.
  • Add the API jars to the legacy portlet's WEB-INF/lib directory for those portlets which will be consuming the services.
  • Add the module.framework.web.generator.excluded.paths entry to your to strip the jar during WAR->WAB conversion.

If you follow these recommendations your legacy portlet wars will be able to leverage the services, any new OSGi-based portlets (or JSP fragments or ...) will be able to access the services, and your deployment impact for changes will be minimized.

My code for all of this is available in github:

Note that the original and upgraded code are in the same repo; they are just in different branches.

Good Luck!


After thinking about this some more, there's actually another path that I did not consider...

For the Service Builder portlet service jar, I indicated you'd need to include it as a dependency in every module that needs to use the service, but I neglected to consider the global service jar option that we used in Liferay 6.x...

So you can keep the Service Builder implementation in the portlet war, but move the service jar to the global class loader (Tomcat's lib/ext directory). Remember that with this option there can only be one service jar, the global one, so no other portlet war nor module (including the Service Builder portlet war itself) can have a copy of the service jar. Also remember that a global service jar can only be updated while Tomcat is down.

The final step is to add the packages for the service interfaces to the module.framework.system.packages.extra property in You want to add your packages to the current list defined in, not replace the list with just your service packages.

Before starting Tomcat, you'll want to add the exception, model and service trio to the list. For the school service example, this would be something like:

module.framework.system.packages.extra=\
	<ALL DEFAULT VALUES COPIED IN>,\
	<school exception package>,\
	<school model package>,\
	<school service package>

This will make the contents of the packages available to the OSGi global class loader so, whether bundle or WAB, they will all have access to the interfaces and static classes.

This has a little bit of a deployment process change to go with it, but you might consider this the least impactful change of all. We tend to frown on the use of the global class loader because it may introduce transitive dependencies and does not support hot deployable updates, but this option might be lower development cost to offset the concern.

David H Nebinger 2017-02-24T03:50:13Z
Categories: CMS, ECM

Liferay OSGi Annotations - A Usage Guide (Translation)

Liferay - Tue, 02/21/2017 - 00:50
Original article:   Original author: DAVID H NEBINGER

When you look through the Liferay 7 CE / Liferay DXP source code, you'll see a large number of different annotations, and at first sight they can feel overwhelming. So I wanted to put together a quick reference guide explaining what these annotations are and when it's appropriate to use them in your own OSGi code.

Let's get started...

@Component

In the OSGi world, this annotation defines a Declarative Services (DS) component. Declarative Services are OSGi's mechanism for dynamically creating services, letting the many components in the container wire up to and invoke one another.

The three main attributes of @Component are:

  • immediate - usually set to true. It makes the component start right after deployment, without waiting for a reference or using lazy startup.
  • properties - a set of OSGi properties to bind to the component. They are visible to the component itself, but more importantly they are visible to other components too; they can help configure the component and can also be used to support component filtering.
  • service - defines the service that the component implements. Sometimes optional, but usually specified to make explicit which service the component provides. The value is normally an interface, though a concrete class can also be used.

When do you need @Component? Whenever you want to publish a component into the OSGi container. Not every class needs to be a component; you declare one only when you are plugging into the Liferay environment (for example, adding a navigation item, defining an MVC command handler, overriding a Liferay component, or writing a plugin for your own extension framework).

@Reference

This annotation is the counterpart of @Component. @Reference fetches other components from the OSGi container and injects them into your own component. Note that since OSGi does the injecting, it only works in OSGi component classes: @Reference is ignored in non-component classes, and it is also ignored in subclasses of a component. All reference injection must be declared in the @Component class itself. If you write a base class that injects services but the base class is not annotated with @Component (because the definition is incomplete), those @Reference annotations are ignored and none of the injection happens; eventually all of the setters and their @Reference annotations have to be copied down into the concrete subclass, which is rather redundant. Keep that in mind.

Probably the most common @Reference attribute is "unbind". You'll often see @Reference(unbind = "-") on a setter method. When you put @Reference on a setter, OSGi uses that setter to inject the component. unbind = "-" says that no method should be called when the reference is released; in other words, no cleanup is needed on release. In most cases this is fine: the server starts, OSGi binds the component, and it is used until shutdown.

Another attribute is "target". Target supports filtering. Remember the properties from @Component? With the target attribute you can select a specific component using a query syntax. For example:

@Reference(
	target = "(javax.portlet.name=" + NotificationsPortletKeys.NOTIFICATIONS + ")",
	unbind = "-"
)
protected void setPanelApp(PanelApp panelApp) {
	_panelApp = panelApp;
}

This code asks for an instance of the PanelApp component, but specifically the PanelApp component from the Notifications portlet; PanelApps from other portlets don't match the filter, only the one from the Notifications portlet does.

The attributes discussed next can be quite important, so I'll go into more detail on them.

Cardinality

First is the cardinality attribute. The default value is ReferenceCardinality.MANDATORY, and the alternatives are OPTIONAL, MULTIPLE, and AT_LEAST_ONE. Their meanings:

  • MANDATORY - the reference must be available and injected before the component starts.
  • OPTIONAL - the reference is not required for the component to start; the component works even with no matching reference.
  • MULTIPLE - multiple resources may satisfy the reference, and the component will use them all. Like OPTIONAL, the reference is not required at startup.
  • AT_LEAST_ONE - multiple resources may satisfy the reference and the component will use them all, but at least one must be available at startup.

The multiple options let a reference be satisfied by multiple matching services. This really only makes sense when @Reference is placed on a setter method that adds to a list or array. Alternatively, you can replace this pattern with a ServiceTracker so you don't have to manage the list yourself.

Optional lets a component start even when the reference has not been assigned. This helps when you have circular reference problems: for example, A references B, B references C, and C references A. If all three references are MANDATORY, none of the components will start, because each is waiting for the others to be satisfied (only a started component can be assigned to an @Reference). Making one of the references optional breaks the cycle, so the components can start and the references resolve.

Policy

The next important @Reference attribute is policy. Its value can be ReferencePolicy.STATIC (the default) or ReferencePolicy.DYNAMIC. Their meanings:

  • STATIC - the component starts only after a reference has been assigned; once started, it ignores any new candidate references that appear.
  • DYNAMIC - the component starts whether or not a reference is available, and it will switch to new references as they become available.

The reference policy controls what happens after your component starts when new reference candidates appear. In short: with STATIC the component ignores new references; with DYNAMIC the component changes references as new ones appear.

PolicyOption

Along with policy goes the policyOption attribute, whose value can be ReferencePolicyOption.RELUCTANT (the default) or ReferencePolicyOption.GREEDY. Their meanings:

  • RELUCTANT - for a single reference, new candidates are ignored; for multiple references, new candidates are bound as they appear.
  • GREEDY - the component binds new candidate references as soon as they appear.

These options can be combined in many ways. The default combination is ReferenceCardinality.MANDATORY + ReferencePolicy.STATIC + ReferencePolicyOption.RELUCTANT: the component requires a reference before it can start, and ignores new candidates afterwards. This default ensures component stability. Another combination is ReferenceCardinality.OPTIONAL/MULTIPLE + ReferencePolicy.DYNAMIC + ReferencePolicyOption.GREEDY: the component works even without a service reference, but references may be added or changed while the component is in use, and the component will eagerly bind new references as they become available.

Other combinations are possible, but you need to understand what each means for your component. After all, when you declare a reference you should know how the component is supposed to work: how it responds when the reference is missing, and whether it must stop working when a service goes away. Think not only about the ideal case but also about edge cases such as redeployment, undeployment, service gaps, and fault tolerance. If you can handle all of those, your component is in good shape.

Finally, to recap: when do you use @Reference? Whenever you need to inject services from the OSGi environment into your component. They can be your own services or services from other modules in the container. Remember, @Reference only works in OSGi components, but you can make your class a component with @Component.

@BeanReference

A Liferay annotation used to inject a reference to a Spring bean from the Liferay core.

@ServiceReference

A Liferay annotation used to inject a reference to a bean from a Spring Extender module.

Wait! Three reference annotations? Which one do I use?

Let's sort out the three. Based on my experience, most of the time you want @Reference. Liferay core Spring beans and Spring Extender module beans are exposed into the OSGi container as well, so @Reference works in most cases. If a service is not injected and comes back null when using @Reference, that's a sign you may need one of the other annotations. The rule is simple: if the bean is in the Liferay core, use @BeanReference; if it is in a Spring Extender module, use @ServiceReference. Note that either of these requires your component to use the Spring Extender too. To see how to set up those dependencies, look at the build.gradle and bnd.bnd of any ServiceBuilder service module and make the same changes to your own module.

@Activate

@Activate is the OSGi counterpart of Spring's InitializingBean interface. It declares the method invoked when the component starts. In the Liferay source you'll see it used with three main signatures:

@Activate
protected void activate() { ... }

@Activate
protected void activate(Map<String, Object> properties) { ... }

@Activate
protected void activate(BundleContext bundleContext, Map<String, Object> properties) { ... }

Other signatures are used as well; search the source for @Activate and you'll find plenty of examples. Apart from the no-argument version, they rely on values injected by OSGi. Note that the properties map actually comes from OSGi's Configuration Admin service.

When do you need @Activate? Whenever your component needs some initialization after startup but before use. I've used it, for example, to set up Quartz scheduled jobs and to verify database entities.

@Deactivate

@Deactivate is the opposite of @Activate; it marks the method invoked when the component is deactivated.

@Modified

@Modified marks the method invoked when the component is modified, in particular when @Reference-annotated wirings change. In the Liferay source, @Modified is often bound to the same method as @Activate, so a single method handles both activation and modification.

@ProviderType

@ProviderType comes from BND, and the full story involves the complexities of semantic versioning. Long story short, BND uses @ProviderType to determine the version ranges written into the OSGi manifests of implementing classes, and it tries to keep those ranges narrow. The point is that when the interface changes, the narrow version ranges on implementations force them to be updated to match the interface.

When do you use @ProviderType? You don't really need to. You'll see it in code generated by ServiceBuilder. I mention it here to satisfy your curiosity, not because you have to use it.

@ImplementationClassName

A Liferay annotation used on ServiceBuilder entity interfaces. It defines the class in the service module that implements the interface. You don't need to use it; it's just here so you know what it does.

@Transactional

Another annotation used on ServiceBuilder service interfaces; it defines the transaction requirements of the service methods. You won't need this one in your own development either.

@Indexable

@Indexable marks methods that should trigger a search index update, particularly the ServiceBuilder methods that add, update, and delete entities. You can use @Indexable on your own add, update, and delete implementation methods; as long as the entity has an associated indexer, the entity will be indexed.

@SystemEvent

@SystemEvent is used in ServiceBuilder-generated code on methods that may generate system events. System events are related to staging, LARs, and export/import processing. For example, when a web content article is deleted, a SystemEvent record is generated; in a staging environment, when "Publish to Live" runs, the delete SystemEvent ensures that the corresponding web content is also deleted from the live site.

When do you need @SystemEvent? Honestly, I have no idea. In my ten years of experience I've never needed to generate SystemEvent records or alter Liferay's publication and LAR handling. If anyone out there has experience with @SystemEvent, I'm all ears.

@Meta

OSGi has an XML-based system for defining configuration details for Configuration Admin. The BND project's @Meta annotations let BND generate that configuration file from the annotated methods of a configuration interface.

Important note: you must add the following line to your bnd.bnd file, or @Meta annotations will not be used when the XML configuration file is generated:

-metatype: *

@Meta.OCD

This covers the "Object Class Definition" part of the configuration details. The annotation supplies the id, name, localization, and other details of the definition at the interface level.

When do you use it? When you define a Configuration Admin interface for a component that should have a configuration entry under Control Panel -> System Settings. Note that @Meta.OCD includes localization settings, so you can use a resource bundle to translate the configuration name, the detailed descriptions, and the @ExtendedObjectClassDefinition category.

@Meta.AD

This covers the "Attribute Definition" part, defining the individual form fields of the configuration. The annotation supplies a field's ID, name, description, default value, and other details. Use it when you need to provide more detail about the configuration items shown in System Settings.

@ExtendedObjectClassDefinition

A Liferay annotation that defines the category of the configuration (the groupings shown in System Settings) and its scope. The scope can be one of the following:

  • SYSTEM - global configuration for the entire system; there is only one configuration for the whole system.
  • COMPANY - company level; there can be one configuration per portal instance.
  • GROUP - group (site) level; there can be one configuration per site.
  • PORTLET_INSTANCE - similar to portlet instance preferences; there is one configuration per portlet instance.

When do you need it? Every time you use @Meta.OCD, you should also use @ExtendedObjectClassDefinition to define the configuration's category.

@OSGiBeanProperties

A Liferay annotation that defines the OSGi component properties used when registering a Spring bean as an OSGi component. You'll see it used in ServiceBuilder modules to expose Spring beans to the OSGi container. Remember, ServiceBuilder is still Spring-based (via the Spring Extender), and this annotation is what registers those Spring beans as OSGi components.

When do you need it? If your own module uses Spring via the Spring Extender and you want to expose your Spring beans as components in the OSGi container (so that other modules can use them), this is the annotation for you.

I won't go into further detail here, since the annotation is extensively documented in its javadoc. Check out this example:

Summary

Those are all the annotations I've come across in Liferay 7 CE / Liferay DXP. I hope this information helps with your Liferay development. If you find an annotation not covered here, or would like more detail, feel free to ask on the original blog or here.

Neil Jin 2017-02-21T05:50:17Z
Categories: CMS, ECM

Farewell Juan!

Liferay - Fri, 02/17/2017 - 15:44
Farewell Juan!

Juan Gonzalez announced that today is his last day as a Liferay employee. While he’ll still be part of the Liferay community, I think this is a nice opportunity for us to thank him for his work at Liferay and to wish him well on his next endeavor.

After working with you for several years, I’m sad to see you go. You’re an incredibly passionate and hard worker. Thanks to your dedication, Liferay 7 correctly supports several servlet features (many of which were required for Liferay Faces to be compatible with Liferay 7). Thanks to your passion, Liferay Faces developers can access portal taglib features through tags like portal:inputSearch (not to mention all the Liferay Faces Portal bugs you’ve fixed). Thanks to your involvement in the forums, countless people in the Liferay community have had their problems solved (you’ve written almost 3k posts and are user #4 overall).

Juan, the Liferay community, the Liferay Faces project, and Liferay itself would not be nearly as great without your contributions. And I’ve personally admired and sought to emulate your productivity, passion, and community involvement. Thanks for all your hard work!

Farewell Juan!

- Kyle

Kyle Joseph Stiemann 2017-02-17T20:44:59Z
Categories: CMS, ECM

Liferay Design Patterns - Multi-Scoped Data/Logic

Liferay - Fri, 02/17/2017 - 10:33
Pattern: Multi-Scoped Data/Logic

Intent

The intent for this pattern is to support data/logic usage in multiple scopes. Liferay defines the scopes Global, Site and Page, but from a development perspective scope refers to Portal and individual OSGi Modules. Classic data access implementations do not support multi-scope access because of boundaries between the scopes.

The Multi-Scoped Data/Logic Liferay Design Pattern's intent is to define how data and logic can be designed to be accessible from all scopes in Liferay, whether in the Portal layer or any other deployed OSGi Modules.

Also Known As

This pattern is implemented using the Liferay Service Builder tool.


Motivation

Standard ORM tools provide access to data for servlet-based web applications, but they are not a good fit in the portal because of class loader and other boundaries between modules. If a design starts from a standard ORM solution, it will be restricted to a single development scope. Often this may seem acceptable for an initial design, but in the portal world a single-scoped solution usually needs to be changed later to support multiple scopes. As the standard tools have no support for multiple scopes, developers will need to hand-code bridge logic to add multi-scope support, and any hand coding increases development time, bug potential, and time to market.

The motivation for Liferay's Service Builder tool is to provide an ORM-like tool with built-in support for multi-scoped data access and business logic sharing. The tool transforms an XML-based entity definition file into layered code to support multiple scopes and is used throughout business logic creation to add multi-scope exposure for the business logic methods.

Additionally the tool is the foundation for adding portal feature support to custom entities, including:

  • Auto-populated entity audit columns.
  • Asset framework support (comments, rankings, Asset Publisher support, etc).
  • Indexing and Search support.
  • Model listeners.
  • Workflow support.
  • Expando support.
  • Dynamic Query support.
  • Automagic JSON web service support.
  • Automagic SOAP web service support.

You're not going to get this kind of integration from your classic ORM tool...

And with Liferay 7 CE / Liferay DXP, additionally you also get an OSGi-compatible API and service bundle implementation ready for deployment.


Applicability

IMHO Service Builder applies when you are dealing with any kind of multi-scoped data entities and/or business logic; it also applies if you need to add any of the indicated portal features to your implementation.


Participants

The participants in this pattern are:

  • An XML file defining the entities.
  • Spring configuration files.
  • Implementation class methods to add business logic.
  • Service consumers.

The participants are used by the Service Builder tool to generate code for the service implementation details.

Details for working with Service Builder are covered in the following sections:


Collaboration

ServiceBuilder uses the entity definition XML file to generate the bulk of the code. Custom business methods are added to the ServiceImpl and LocalServiceImpl classes for the custom entities and ServiceBuilder will include them in the service API.


Consequences

Using Service Builder to generate your entities has no real downside in the portal environment. Service Builder will generate an ORM layer and provide integration points for all of the core Liferay features.

There are three typical arguments used by architects and developers for not using Service Builder:

  • It is not a complete ORM. This is true: it does not support everything a full ORM does. It doesn't support many-to-many relationships, and it doesn't handle automatic parent-child relationships in one-to-many. All that means is that the code to handle many-to-many, and even some one-to-many, relationships will need to be hand-coded.
  • It still uses old XML files instead of newer annotations. This is also true, but it is mostly a reflection of Liferay generating all of the code, including the interfaces. Since Liferay adds portal features based on the XML definitions, using annotations would require Liferay to modify the annotated interfaces and cause circular change effects.
  • I already know how to develop using X, and my project deadlines are too short to learn a new tool like Service Builder. Yes, there is a learning curve with Service Builder, but it is nothing compared to the mountain of work it will take to get X working correctly in the portal, and some Liferay features simply will not be options for you without Service Builder's generated code.

All of these arguments are weak in light of what you get by using Service Builder.

Sample Usage

Service Builder is another case of Liferay eating its own dog food. The entire portal is based on Service Builder for all of the entities in all of the portlets, the Liferay entities, etc.

Check out any of the Liferay modules from simple cases like Bookmarks through more complicated cases such as Workflow or the Asset Publisher.


Conclusion

Service Builder is a must-use if you are going to do any integrated portal development. You can't build the portal features into your portlets without Service Builder usage.

Seriously. You have no other choice. And I'm not saying this because I'm a fanboy or anything, I'm coming from a place of experience. My first project on Liferay dealt with a number of portlets using a service layer; I knew Hibernate but didn't want to take time out to learn Service Builder. That was a terrible mistake on my part. I never did deal with the multi-scoping well at all, never got the kind of Liferay integration that would have been great to have. Fortunately it was not a big problem to have made such a mistake, but I learned from it and use Service Builder all the time now in the portal.

So I share this experience with you in hopes that you too can avoid the mistakes I made. Use Service Builder for your own good!

David H Nebinger 2017-02-17T15:33:26Z
Categories: CMS, ECM

Liferay IDE 3.1 Milestone 2 Released

Liferay - Thu, 02/16/2017 - 20:56

Hi all,

We are happy to announce a new milestone release of Liferay IDE 3.1, as described in the Updates and Feedback section of Liferay IDE 3.1 Milestone 1 Released.

Go to to install the update site.

If you want to download a full Eclipse Neon bundled with Liferay IDE, just go to this page.

The most notable feature in this milestone is Maven support in the Code Upgrade Tool.

You can put your 6.2 Maven root project path in the second page of the Code Upgrade Tool and click the “Import Projects” button.

There is also a new page called “Upgrade POM Files” added as the third page, and we moved the “Find Breaking Changes” page before the “Update Descriptor Files” page.

On the third page, you can search all pom.xml files in the current Eclipse workspace, preview the changes the Code Upgrade Tool is going to make, and then either upgrade them one by one or all at once. Be sure to double-click a file to see what changes the tool will make.



A few more features were added:

Add ability to customize bundle URL when importing a Liferay Workspace Project

Deploy Error Tooltip on Liferay Server


When a project fails to build after some changes, hover over the server to see detailed information about the error.


Next Release

Our next release will be 3.1 M3 and will come with more helpful features, e.g. Maven-type Liferay Workspace support, a new JSF portlet wizard, legacy Service Builder project migration, and source lookup fixes in Liferay 7.x server launches.

Please go to the community forums if you have any questions. We will try our best to help.

Regards,

Andy Wu 2017-02-17T01:56:14Z
Categories: CMS, ECM

Improving Test Performance on Liferay

Liferay - Wed, 02/15/2017 - 13:26
Improving Test Performance on Liferay

Recently, we upgraded the JSF Portlet Bridge Test Compatibility Kit (TCK) from Selenium 1.0 to our new test framework which uses Selenium 2.53. The TCK contains about 220 tests, and on Liferay 7.0 before we upgraded, it took around 8 minutes and 30 seconds to execute. In contrast, the TCK took 1 minute and 30 seconds to run on Apache Pluto, the portlet container reference implementation. In order to speed up test execution for Liferay Portal, we decided to try a few simple tricks and changes to reduce the total test time, and… we succeeded! Here’s a breakdown for how fast each browser ran the TCK on my machine before our upgrade, after our upgrade, and after we tweaked the TCK with some Liferay specific tricks:

Browser                    Selenium 1.0 Test Time   Selenium 2.53 Test Time   Selenium 2.53 w/Exclusive Window State Test Time
HtmlUnit*                  -                        -                         ~00:00:45
Chrome                     -                        ~00:04:30                 ~00:01:30
Firefox                    ~00:08:30 (w/FF v21.0)   ~00:06:00 (w/FF v46.0)    ~00:02:30
JBrowser* (experimental)   -                        -                         ~00:02:30
PhantomJS*                 -                        ~00:16:15                 ~00:10:30

* Headless browser.

As you can see, we made some pretty big improvements just by upgrading to a modern version of Selenium. Running tests with HtmlUnit also provided a nice boost over other browsers. Before I talk in-depth about HtmlUnit though, I want to begin by explaining the Liferay-specific tweaks and tricks that we used to speed up the TCK. These techniques are specific to Liferay but not to Selenium or HtmlUnit, so they may be useful in improving the performance of Liferay portlet tests regardless of the test framework or browsers used.

Exclusive Window State

The main performance boost is thanks to Liferay’s custom “exclusive” window state.1 In Liferay, any portlet can be accessed in the exclusive state with a render URL containing the portlet’s id (p_p_id) and p_p_state=exclusive:


Liferay’s exclusive window state feature was created when Liferay only supported Portlet 1.0, and it provides functionality similar to the RESOURCE_PHASE of the Portlet 2.0 lifecycle. When Liferay Portal returns a portlet’s markup in the exclusive state, the response contains only the portlet’s markup fragment, without <html>, <head>, or <body> tags. Thankfully, every browser we use for testing handles the incomplete markup gracefully and renders the portlet in a testable way.

Because it provides only the bare-bones portlet markup, the exclusive state doesn’t render any portal markup or frontend resources (such as CSS or JS). This greatly reduces the time it takes for a browser to load the page. The exclusive window state is useful for quickly testing backend functionality and rendering, but I would not recommend it for testing portlets with complex JavaScript or CSS. In fact, if you want to test a portlet with any external JavaScript resources at all, you will need to make sure that the resource’s script tag is rendered in the portlet’s body (<div>) markup rather than the portal’s <head> section (which isn’t rendered).2 Certainly, the exclusive state will not work for all tests, but it can provide significant speed improvements for certain ones.

Pop Up Window State

If you cannot easily move your portlet’s resources into your portlet’s body or if you rely on Liferay Portal’s frontend features (JS and/or CSS), try using the “pop_up” window state instead. In Liferay, any portlet can be accessed in the pop up state with a render URL containing the portlet’s id (p_p_id) and p_p_state=pop_up:


When a portlet is in the pop up state, the portal renders the <html>, <head>, and <body> tags including JavaScript and CSS resources. However, as with the exclusive state, the portal does not render any portal markup or portlets besides the one referenced in the URL. Even though using the exclusive state yielded greater performance benefits, using the pop up state still significantly sped up our tests (probably by about 25%).
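To make the URL format concrete, here is a small hedged sketch in Java (the page URL and portlet id are made-up examples, and real Liferay URLs may carry additional parameters) showing how a render URL for a given window state is assembled from the p_p_id and p_p_state parameters described above:

```java
// Hypothetical helper for building render URLs that request a portlet in a
// specific Liferay window state (e.g. "exclusive" or "pop_up").
public class WindowStateUrls {

	public static String renderUrl(String pageUrl, String portletId, String windowState) {
		// p_p_id selects the portlet; p_p_state selects the window state.
		return pageUrl + "?p_p_id=" + portletId + "&p_p_state=" + windowState;
	}

	public static void main(String[] args) {
		// Prints: http://localhost:8080/web/guest/home?p_p_id=com_example_portlet&p_p_state=exclusive
		System.out.println(renderUrl("http://localhost:8080/web/guest/home", "com_example_portlet", "exclusive"));
	}
}
```

Pointing your WebDriver at a URL like this is all it takes to exercise a single portlet in either of the two states.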

Of course, neither of these two states should be used for testing portlets which may interact with each other, since they only render a single portlet. These states also don’t help if you are trying to test other window states (as we do in the TCK). However, while upgrading the TCK, we found another way you can speed up your tests if you are using Selenium.


HtmlUnit

Of all the Selenium-compatible browsers that we test with, HtmlUnit is by far the fastest. And since HtmlUnit is headless, it can be run on a CI server without a window manager. HtmlUnit is not perfect: its JavaScript engine has trouble running more complicated or cutting-edge code, so I wouldn’t recommend it for testing newer browser features like the History API (which SennaJS uses) or HTML5. Nonetheless, it is an excellent browser for testing simple pages quickly. So how do you use HtmlUnit? Well, if you are using Selenium, simply add the HtmlUnit dependency to your project (make sure you get the latest one):

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>htmlunit-driver</artifactId>
    <version>2.23.2</version>
    <scope>test</scope>
</dependency>

Then just change your WebDriver implementation to HtmlUnitDriver:

WebDriver webDriver = new HtmlUnitDriver();

The End.

Well… not really. HtmlUnit is a simple tool, but unfortunately configuring it to behave like every other browser is over-complicated and poorly documented. First, you will need to specify the following dependencies to avoid several NoClassDefFoundError/ClassNotFoundException issues:

<dependency>
    <groupId>xml-apis</groupId>
    <artifactId>xml-apis</artifactId>
    <version>1.4.01</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.eclipse.jetty.websocket</groupId>
    <artifactId>websocket-client</artifactId>
    <version>9.2.18.v20160721</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
    <scope>test</scope>
</dependency>

Second, I recommend disabling (or filtering) HtmlUnit’s incredibly verbose logging:

// Uncomment to enable HtmlUnit's logs conditionally when the log level is FINER or higher.
// if (logLevel.intValue() > Level.FINER.intValue()) {
LogFactory.getFactory().setAttribute("org.apache.commons.logging.Log",
    "org.apache.commons.logging.impl.NoOpLog");
Logger.getLogger("com.gargoylesoftware").setLevel(Level.OFF);
Logger.getLogger("org.apache.commons.httpclient").setLevel(Level.OFF);
// }

Third, you should extend HtmlUnitDriver:

public class CustomHtmlUnitDriver extends HtmlUnitDriver {

Extending HtmlUnitDriver allows you to force HtmlUnit not to throw exceptions when JavaScript errors occur, and it allows you to silence CSS errors:

@Override
protected WebClient modifyWebClient(WebClient initialWebClient) {

    WebClient webClient = super.modifyWebClient(initialWebClient);

    // Don't throw exceptions when JavaScript errors occur.
    webClient.getOptions().setThrowExceptionOnScriptError(false);

    // Uncomment to filter CSS errors.
    // if (logLevel.intValue() > Level.FINEST.intValue()) {
    webClient.setCssErrorHandler(new SilentCssErrorHandler());
    // }

    return webClient;
}

Fourth, you should note that HtmlUnit does not load any images by default,3 so any tests which require images to be loaded should manually call a method like the one below to load them:

public void loadImages() {

    HtmlPage htmlPage = (HtmlPage) lastPage();

    DomNodeList<DomElement> imageElements = htmlPage.getElementsByTagName("img");

    for (DomElement imageElement : imageElements) {

        HtmlImage htmlImage = (HtmlImage) imageElement;

        try {
            // Download the image.
            htmlImage.getImageReader();
        }
        catch (IOException e) {
            // Do nothing.
        }
    }
}

Finally, always initialize HtmlUnit with JavaScript enabled (and emulate a popular browser):

WebDriver webDriver = new CustomHtmlUnitDriver(BrowserVersion.FIREFOX_45, true);

If you want a complete example of how we use HtmlUnit in Liferay Faces, see the Liferay Faces source.

Hopefully, the advice in this post will help you speed up your tests in Liferay. If you’ve found any other tricks that improve your tests’ performance, please post them in the comments below.

  1. Liferay Faces team lead, Neil Griffin, suggested using the exclusive window state to speed up tests.
  2. Liferay Faces Bridge automatically handles this case and moves JS and CSS resource markup into the portlet’s body (<div>) if the exclusive state is being used.
  3. Once HtmlUnit 2.25 and a compatible HtmlUnitDriver are released, you will be able to configure HtmlUnit to download all images automatically.
Kyle Joseph Stiemann 2017-02-15T18:26:41Z
Categories: CMS, ECM

Liferay Design Patterns - Flexible Entity Presentation

Liferay - Wed, 02/15/2017 - 09:18

So I'm going to start a new type of blog series here covering design patterns in Liferay.

As we all know:

In software engineering, a software design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. - Wikipedia

In Liferay, there are a number of APIs and frameworks used to support Liferay-specific general reusable solutions. But since we haven't defined them in a design pattern, you might not be aware of them and/or if/when/how they could be used.

So I'm going to carve out some time to write about some "design patterns" based on Liferay APIs and frameworks. Hopefully they'll be useful as you go forward in designing your own Liferay-based solutions.

Since this is a first stab at defining these as Liferay Design Patterns, I'm expecting some disagreement on simple things (that design pattern name doesn't seem right) as well as some complex things... Please go ahead and throw your comments at me and I'll make the necessary changes to the post. Remember, this isn't for me, this is for you. 

And I must add that yes, I'm taking great liberties in using the phrase "design pattern". Most of the Liferay APIs and frameworks I'm going to cover are really combinations of well-documented software design patterns (in fact Liferay source actually implements a large swath of creational, structural and behavioral design patterns in such purity and clarity they are easy to overlook).

These blog posts may not be defining clean and simple design patterns as specified by the Gang of Four, but they will try to live up to the ideals of true design patterns. They will provide general, reusable solutions to commonly occurring problems in the context of a Liferay-based system.

Ultimately the goal is to demonstrate how by applying these Liferay Design Patterns that you too can design and build a Liferay-based solution that is rich in presentation, robust in functionality and consistent in usage and display. By providing the motivation for using these APIs and frameworks, you will be able to evaluate how they can be used to take your Liferay projects to the next level.

Pattern: Flexible Entity Presentation


The intent of the Flexible Entity Presentation pattern is to support a dynamic templating mechanism that supports runtime display generation instead of a classic development-time fixed representation, further separating view management from portlet development.

Also Known As

This pattern is known as, and implemented by, the Application Display Template (ADT) framework in Liferay.


Motivation

The problem with most portlets is that the code used to present custom entities is handled as a development-time concern; the UI specifications define how the entity is shown on the page and the development team delivers a solution to satisfy the requirements. Any change to the specifications during development results in a change request for the development team, and post-development each change represents a new development project to implement presentation changes.

The inflexibility of the presentation impacts time to market, delivery cycles and development resource allocation.

The Flexible Entity Presentation pattern's motivation is to support a user-driven mechanism to present custom entities in a dynamic way.

The users and admins documentation section on ADTs starts:

The application display template (ADT) framework allows Liferay administrators to override the default display templates, removing limitations to the way your site’s content is displayed.

ADTs allow the display of an entity to be handled by a dynamic template instead of handled by static code. Don't get hung up on the word content here, it's not just content as in web content but more of a generic reference to any html content your portlet needs to render.

Liferay identified this motivation when dealing with client requests for product changes to adapt presentation in different ways to satisfy varying client requirements.  Liferay created and uses the ADT framework extensively in many of the OOTB portlets from web content through breadcrumbs.  By leveraging ADTs, Liferay defines the entities, i.e. a Bookmark, but the presentation can be overridden by an administrator and an ADT to show the details according to their requirements, and all without a development change by Liferay or a code customization by the client.

Liferay eats its own dogfood by leveraging the ADT framework, so this is a well tested framework for supporting dynamic presentation of entities.

When you look at many of the core portlets, they now support ADTs to manage their display aspects since tweaking an ADT is much simpler than creating a JSP fragment bundle or new custom portlet or some crazy JS/CSS fu in order to affect a presentation change. This flexibility is key for supporting changes in the Liferay UI without extensive code customizations.


Applicability

The use of ADTs applies when the presentation of an entity is subject to change. Since admins will use ADTs to manage how the entities are displayed, the presentation does not need to be finalized before development starts. When the ADT framework is incorporated in the design out of the gate, flexibility in the presentation is baked into the design and the door is open to any future presentation changes without code development, testing and deployment.

So there are some fairly clear use cases to apply ADTs:

  • The presentation of the custom entities is likely to change.
  • The presentation of the custom entities may need to change based upon context (list view, single view, etc.).
  • The presentation should not be an aspect of the portlet development itself.
  • The project is a Liferay Marketplace application and presentation customization is necessary.

Notice the theme here, the change in presentation.

ADTs would either not apply or would be overkill for a static entity presentation, one that doesn't benefit from presentation flexibility.


Participants

The participants in this pattern are:

  • A custom entity.
  • A custom PortletDisplayTemplateHandler.
  • ADT Resource Portlet Permissions.
  • Portlet Configuration for ADT Selection.
  • Portlet View Leveraging ADTs.

The participants work together with the Liferay ADT framework to support a dynamic presentation for the entity. The implementation details for the participants are covered here:


The custom entity is defined in the Service Builder layer (normally).

The PortletDisplayTemplateHandler implementation is used to feed meta information about the fields and descriptions of the entity to the ADT framework's Template Editor UI. The meta information provided will generally be tightly coupled to the custom entity, in that changes to the entity will usually result in changes to the PortletDisplayTemplateHandler implementation.

The ADT resource portlet permissions must be enabled for the portlet so administrators will be able to choose the display template and edit display templates for the entity.

The portlet configuration panel is where the administrator will choose between display templates, and the portlet view will leverage Liferay's ADT tag library to inject the rendered template into the view.
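As a sketch of what an administrator-editable template might look like (this assumes, as in Liferay's own ADT-enabled portlets, that the portlet exposes the selected entities to the template as entries; the Course entity and its name field are hypothetical examples):

```
<#-- Hypothetical ADT rendering a list of Course entities. -->
<#if entries?has_content>
    <ul>
        <#list entries as course>
            <li>${course.name}</li>
        </#list>
    </ul>
</#if>
```

An administrator can change this markup at any time from the portal UI, with no redeployment of the portlet.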


Consequences

By moving to an ADT-based presentation of the entity, the template engine (FreeMarker) will be used to render the view.

The template engine will impose a performance cost in supporting the flexible presentation (especially if someone creates a bad template). Implementors should strike a balance between beneficial flexibility and overuse of the ADT framework.

Sample Usage

For practical examples, consider a portal based around a school.  Some common custom entities would be defined for students, rooms, teachers, courses, books, etc.

Consider how often the presentation of the entities may need to change and weigh that against whether the changes are best handled in code or in template.

A course or a teacher entity would likely benefit from the ADT as those entities might need to change the presentation of the course as a brochure-like view needs to change, or the teacher when new additions such as accreditation or course history would change the presentation.

The students and rooms may not benefit from ADTs if the presentation is going to remain fairly static.  These entities might go through future presentation changes but it may be more acceptable to approach those as development projects that are planned and coordinated.

Known Uses

The best known uses come from Liferay itself. The OOTB portlets which leverage ADTs are:

  • Asset Publisher
  • Blogs
  • Breadcrumbs
  • Categories Navigation
  • Documents and Media
  • Language Selector
  • Navigation Menu
  • RSS Publisher
  • Site Map
  • Tags Navigation
  • Web Content Display
  • Wiki

This provides many examples for when to use ADTs, the obvious advantage of ADTs (customized displays w/o additional coding) and even hints where ADTs may not work well (i.e. users/orgs control panel, polls, ...).


Well, that's pretty much it for this post. I'd encourage you to go and read the section for styling apps with ADTs as it will help solidify the motivations to incorporate the ADT framework into your design. When you understand how an admin would use ADTs to create a flexible presentation of the Liferay entities, it should help to highlight how you can achieve the same flexibility for your custom assets.

When you're ready to realize these benefits, you can refer to the implementing ADTs page to help with your implementation.


David H Nebinger 2017-02-15T14:18:18Z
Categories: CMS, ECM

Now Available: Social Office to Liferay 7 Upgrade

Liferay - Mon, 02/13/2017 - 18:09

I am pleased to announce the Social Office to Liferay 7 upgrade has been released!


Liferay Social Office was previously an add-on for Liferay Portal CE 6.2 and earlier.  With the release of Liferay 7, most of these components have been added to Liferay Portal CE and Social Office has been removed from Marketplace.  This means that Social Office no longer requires a separate installation, upgrade and support.

Most of the Social Office to Liferay Portal CE 7.0 upgrade process is completed by the standard Liferay upgrade procedure. There are additional steps that need to be completed after the standard upgrade process is finished in order for Social Office to work with Liferay Portal CE 7.0.

Upgrade Overview

Social Office to Liferay 7 Upgrade - We have provided a series of components, an upgrade tool and upgrade instructions that will aid in upgrading Liferay Social Office installations to Liferay 7. The majority of the upgrade follows the standard Liferay 7 Upgrade Guide.  The Social Office upgrade tool ensures that Social Office is properly upgraded to work with Liferay 7.  Changes to Social Office when upgrading to Liferay 7 are covered below.

Social Office Theme - The Social Office Theme has been removed entirely, and the Social Office Upgrade Tool will change over to using the out-of-the-box classic theme.  The idea behind this change is that the Social Office theme was never meant to be customized.  Now any custom theme can be used instead of being locked into one theme.

Social Office Site Templates - Social Office installed a set of Site Templates within Liferay Portal upon installation.  These will be carried forward as a part of the upgrade and will work the same as they did in previous versions of Social Office.  For a fresh installation of Liferay 7  the Site Templates can be recreated and used in a similar fashion.

Customized Liferay Portal Apps - Announcements, Document Library Enhancements, Notifications and Bookmarks had specific customizations for Social Office, and these have been merged into Liferay 7.  The Chat application is waiting for changes in Liferay Portal 7.0 CE GA4 and will be released shortly after GA4.

Social Office specific Apps - Microblogs, Contacts Center and Social Office User Profiles are in Liferay 7 out of the box.  Private Messaging, Event List and WYSIWYG are provided as Marketplace apps (links in the documentation).  The Tasks application has been removed, but the source is still available here and can be upgraded to Liferay 7 if need be.

Social Office UX enhancements - The Social Office Dashboard, Social Office Profile and user bar have been replaced by the Liferay 7 Control Panel, and the Go To menu has been replaced by the My Sites application, which can be added to a custom theme in Liferay 7 as in previous versions.


A special thanks goes out to everyone who helped by providing feedback and took the time to help test the tooling and the documentation.

Jamie Sammons 2017-02-13T23:09:30Z
Categories: CMS, ECM

Adding Dependencies to JSP Fragment Bundles

Liferay - Wed, 02/08/2017 - 09:10

Recently I was lamenting how I felt that JSP fragment bundles could not introduce new dependencies and therefore the JSP overrides could really not do much more than reorganize or add/remove already supported elements on the page.

For me, this is like only 5% of the use cases for a JSP override. I am much more likely to need to add new functionality that the original portlet developers didn't need to consider.  I need to be able to add new services and use those in the JSP to retrieve entities, and sometimes just really do completely different things w/ the JSP that perhaps were never imagined.

The first time I tried a JSP override to do something similar with a JSP fragment bundle, I was disappointed. My fragment bundle would get to status "Installed" in GoGo, but would go no further because it had unresolved references.  It just couldn't get to the resolved stage.

How could I make the next great JSP fragment override bundle if I couldn't access anything outside the original set of services?

My good friend and coworker Milen Dyankov heard my rant and offered the following insight:

According to the spec:

... requirements and capabilities in a fragment bundle never become part of the fragment's Bundle Wiring; they are treated as part of the host's requirements and capabilities when the fragment is attached to that host.

As for providing declarative services in fragments, again the spec is clear:

A Service-Component manifest header specified in a fragment is ignored by SCR. However, XML documents referenced by a bundle's Service-Component manifest header may be contained in attached fragments.

In other words, if your host has Service-Components: OSGI-INF/*.xml then your fragment can put a new XML file in the OSGI-INF folder and it will be processed by SCR.

Now sometimes Milen seems to forget that I'm just a mere mortal and not the OSGi guru he is, so while this was perfectly clear to him, it left me wondering if there was anything here that would be my lever to lift the lid and peek inside the JSP fragment bundle realm.

The remainder of this blog is the result of that epic journey .

Service Component Runtime

The SCR is the Apache Felix implementation of the OSGi Declarative Services specification. It's responsible for handling the service registry and lifecycle management of DS components within the OSGi container, starting/stopping the services as bundles are started/stopped, wiring up @Reference dependencies in DS components, etc.

Since the fragment bundle handling comes from the Apache Felix implementation, it's not really a Liferay component and certainly not one that would lend itself to an override in the normal Liferay sense. Anything we do here to access services in the JSP fragment bundles is going to have to go through supported OSGi mechanisms or we won't get anywhere.

So the key in Milen's quote above is "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments." The rough translation here - we might be able to provide an override XML file for one of the host bundle's components and possibly inject new dependencies. Yes, as a rough translation it really assumes that you know more than you might (and especially more than I did), so let's divert for a second.

Service Component Manifest XML Documents

So the BND tool that we all know and love actually does many, many things for us when it builds a bundle jar. One of those tasks is to generate the service component manifest and all of the XML documents. The contents of these files are basically the metadata the SCR needs for dependency resolution, component wiring, etc.

Any time you annotate your Java class with @Component, you are indicating it is a DS service. When BND processes the annotations, it adds an entry to the Service Component Manifest (so the SCR will process the component during bundle start). The Service Component Manifest is the Service-Component key in the bundle's MANIFEST.MF file, and it lists an individual OSGI-INF XML file for each component.

These XML files define the component for the SCR, specifying the java class that implements the component, the service it provides, all reference details and properties for the component.

So if you take any bundle jar you have, expand it and check out the MANIFEST.MF file and look for the Service-Component key. You'll find there's one OSGI-INF/com.example.package.JavaClass.xml file (where it is your package and class) for each component defined in your bundle.

If you open one of the XML files, you can see the structure for a component definition, and it is easy to see how things that you set in the @Component annotation attributes have been mapped into the XML file.
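For illustration only (the class and service names here are hypothetical, and the files BND actually generates carry additional attributes), a minimal component XML referenced from the Service-Component header might look something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of an SCR component description; names are made up. -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
	name="com.example.ExampleComponent">
	<implementation class="com.example.ExampleComponent"/>
	<service>
		<provide interface="com.example.ExampleService"/>
	</service>
	<reference name="OtherService"
		interface="com.example.OtherService"
		bind="setOtherService"/>
</scr:component>
```

You can see how the implementation class, provided service, and @Reference bindings from the annotation end up as elements the SCR can consume.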

Now that we know about the manifest and XML docs, we can get back to our regularly scheduled program.

Overriding An SCR XML

So remember, we should be able to override one of these files because "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments."

This hints that we cannot add a new file, but we could override an existing one.

So to me, this is the key question - can we create an override XML file to introduce a new dependency, one that really cannot be directly bound to the original (since we can't modify the class) so at least the bundle would have a new dependency and the JSP would be happy?

Well, I actually used all of this newfound knowledge to work up a test and tried it out, but it failed. It didn't make any sense...

Return To The Jedi

"Milen, my SCR XML override isn't working."

"Overrides won't work because the XML files are loaded by the class loader, and the host bundle comes before the fragment bundle so SCR ignores the override.  You can't override the XML, you can only add a new one to the fragment bundle."

"But Milen you said I couldn't add new XML files, only those listed in the Service-Component in the MANIFEST.MF file of the host bundle will be used by SCR during loads."

"Change your Service-Component key to use a wildcard like OSGI-INF/* and SCR will load the ones from the host bundle as well as the fragment bundle. It's considered bad practice, but it would work."

"I can't do that, Milen, I'm doing a JSP fragment bundle on a Liferay host bundle, I can't change the Service-Component manifest value and, if I could, I wouldn't need to do any of this fragment bundling in the first place because I would just apply my change directly in the host bundle and be done with it."

"Well then the SCR XML override isn't going to work. Let's try something else..."

Example Project

After working out a new plan of attack, I was going to need an example project to test this all out and verify that it was going to work. The example must include a JSP fragment bundle override and introduce another previously unused service. I don't really want to do any more coding than necessary here, so let's pick something to do out of the portal JSPs and services.

Requirement: On login form, display the current count of membership requests.

Pretty simple, maybe part of some automated membership request handling being added to the portal or trying to show how popular the site is by showing count of how many are waiting to get in.

But it gives us the goal here, we want to access the MemberRequestLocalService inside of the login.jsp page of the login-web host bundle. The service is defined in the com.liferay.invitation.invite.members.api bundle and is not currently connected in any way with the login web module.

Creating The Fragment Bundle

I'll continue my pattern of using blade on the command line, but of course you're free to leverage tools provided by your IDE.

blade create -t fragment -h com.liferay.login.web -H 1.1.4 login-web-fragment

Remember to choose the fragment bundle version from your local portal so you'll override the right one and make OSGi/SCR happy.
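The key piece the fragment template generates is the Fragment-Host header in the bnd.bnd file. As a sketch (the symbolic name and versions are illustrative; the host version must match your local portal's login web module):

```
Bundle-Name: login-web-fragment
Bundle-SymbolicName: login.web.fragment
Bundle-Version: 1.0.0
Fragment-Host: com.liferay.login.web;bundle-version="1.1.4"
```

The Fragment-Host header is what tells OSGi which host bundle (and which version of it) this fragment attaches to.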

Copy in the login.jsp page from the portal source. After the include of init.jsp, add the following lines:

<%@ page import="com.liferay.invitation.invite.members.service.MemberRequestLocalService" %>

<%
// Get the service from the render request attributes.
MemberRequestLocalService memberRequestLocalService =
    (MemberRequestLocalService) renderRequest.getAttribute("MemberRequestLocalService");

// Get the current count.
int currentRequestCount = memberRequestLocalService.getMemberRequestsCount();

// Display it somewhere on the page...
%>

Very simple. Doesn't really display, but that's not the point in this blog.

Now if you build and deploy this guy as-is and check him in GoGo, you'll see his state is "Installed". This is not good, as it is not where it needs to be for the JSP fragment to work.

Adding The Dependency

So we have to go back to how OSGi handles fragment bundles... When OSGi loads the fragment, the MANIFEST.MF items from the fragment bundle are effectively merged with those from the host bundle.

For me, that means I have to list my dependency in build.gradle and trust BND will add the right Import-Package declaration to the final MANIFEST.MF file.

Then, when the framework is loading my fragment bundle, my Import-Package from the fragment will be added to the Import-Package of the host bundle and all should be good.

JSP fragment bundles created by blade do not have dependencies listed in the build.gradle file (in fact it is completely empty), so let's add the dependency stanza:

dependencies {
    compile group: "com.liferay", name: "com.liferay.invitation.invite.members.api", version: "2.1.1"
}

We only need to add the dependency that is missing from the host bundle, the one with the service we're going to pull in.

After building, you can unpack the jar and check the MANIFEST.MF file and see that it does now have the Import-Package declaration, so if SCR does actually do the merge while loading, we should be in business.
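As a hedged example of what to look for in the unpacked MANIFEST.MF (the exact version range BND computes may differ):

```
Import-Package: com.liferay.invitation.invite.members.service;version="[2.1,3)"
```

When the fragment attaches, this Import-Package is merged into the host bundle's requirements, which is what lets the JSP resolve the service class.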

Deploy your new JSP fragment bundle and if you check the bundle status in GoGo, you'll see it is now "Resolved".


Injecting The Reference

Not so fast. If you try to log into your portal, you'll get the "portlet is temporarily unavailable" message and the log file will have a NullPointerException and a big stack trace. We've totally broken the login portlet because login.jsp depends upon the service but it is not set.

If you check the JSP change I shared, I'm pulling the service instance from the render request attributes. But how the heck does it get in there when we cannot change the host bundle to inject it in the first place?

We're going to do this using another OSGi module with a new component that implements the PortletFilter interface, specifically a RenderFilter.

@Component(
    immediate = true,
    property = {
        "javax.portlet.name=" + LoginPortletKeys.LOGIN,
        "javax.portlet.name=" + LoginPortletKeys.FAST_LOGIN
    },
    service = PortletFilter.class
)
public class LoginRenderFilter implements RenderFilter {

    @Override
    public void doFilter(RenderRequest request, RenderResponse response, FilterChain chain)
        throws IOException, PortletException {

        // set the request attribute so it is available when the JSP renders
        request.setAttribute("MemberRequestLocalService", _memberRequestLocalService);

        // let the filter chain do its thing
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) throws PortletException {
    }

    @Override
    public void destroy() {
    }

    @Reference(unbind = "-")
    protected void setMemberRequestLocalService(final MemberRequestLocalService memberRequestLocalService) {
        _memberRequestLocalService = memberRequestLocalService;
    }

    private MemberRequestLocalService _memberRequestLocalService;
}

So here we are intercepting the render request using the portlet filter. We inject the service into the request attributes before invoking the filter chain to complete the rendering; that way when the JSP page from the fragment bundle is used, the attribute will be set and ready.

Build and deploy your new component. Once it starts, refresh your browser and try to log in. You should now see the login portlet again. Not that we did anything fancy here, we're just proving that the service reference is not null and is available for the JSP override to use.


So we took a roundabout path to get here, but we've seen how to create a JSP fragment bundle to override portal JSPs, how to add a dependency to the fragment bundle that gets merged into the host bundle, and how to create a portlet filter bundle that injects the service reference into the request attributes so it is available to the JSP page.

Two different bundle jars, but it certainly gets the job done.

Also along the way we learned some things about what the SCR is, how fragment bundles work, as well as some of the internals of our OSGi bundle jars and the role that BND plays in their construction.  Useful information, IMHO, that can help you while learning Liferay 7 CE/Liferay DXP.

This now opens some new paths for you to pursue for your JSP fragment bundles.  Just follow the outline here and you should be good to go.

Find the project code for the blog here:

David H Nebinger 2017-02-08T14:10:49Z
Categories: CMS, ECM

Building In Upgrade Support

Liferay - Tue, 02/07/2017 - 10:25

One of the things that I never really used in 6.x was the Liferay upgrade APIs.

Sure, I knew about the Release table and such, but it seemed cumbersome not only to build out your code but also to track your releases and support an upgrade process on top of all of that. I mean, I'm a busy guy, and once one project is done I'm already behind on the next one.

When you start perusing the Liferay 7 source code, though, one thing you'll notice is that there is upgrade logic all over the place. Pretty much every portlet module includes an upgrade process to support upgrading from version "0.0.0" to version "1.0.0" (this is the upgrade process to change from 6.x to the new 7.x module version).

And you'll even find that some modules include upgrades from versions "1.0.0" to "1.0.1" to support the independent module versioning that was the promise of OSGi.

So now that I'm trying to exclusively build modules, I'm thinking it's an appropriate time to dig into the upgrade APIs and see how they work and how I can incorporate upgrades into my modules.

The New Release

So previously we'd have to manage the Release entity ourselves, but Liferay has graciously taken that over for us. Your bnd.bnd file, where you specify your module version, now becomes the foundation of your Release handling. And just like the portal modules, the absence of a Release record is technically version "0.0.0", so now you can handle first-time deployment stuff too.

The Upgrade API

Before diving into implementation, let's take a little time to look over some of the classes and interfaces Liferay provides as part of the Upgrade API. We'll start with the classes from the com.liferay.portal.kernel.upgrade package:

  • UpgradeStep - The main interface that must be implemented for all upgrade logic. When registering an upgrade, an ordered list of UpgradeSteps is provided, and the upgrade process executes them in order to complete an upgrade.
  • DummyUpgradeStep - The simplest concrete implementation of the UpgradeStep interface; this upgrade step does nothing, but it is a useful step for handling new deployments.
  • UpgradeProcess - A handy abstract base class to use for all of your upgrade steps. It implements the UpgradeStep interface and has support for database-specific alterations should you need them.
  • Base* - Abstract base classes for upgrade steps typically used by the portal for managing upgrades from portlet wars to new module-based portlets. For example, the BaseUpgradePortletId class supports fixing the portlet ids from older id-based portlet names to the new OSGi portlet ids based on class name. These classes are good foundations if you are building an upgrade process to move your own portlets from wars to bundles or want to handle upgrades from 6.x compatibility to 7.x.
  • util.* - For those wanting to support a database upgrade, the com.liferay.portal.kernel.upgrade.util package contains a bunch of support classes to assist with altering tables, columns, indexes, etc.

Registering The Upgrade

All upgrade definitions need to be registered. That's pretty easy, of course, when one is using OSGi. To register an upgrade, you just need a component that implements the UpgradeStepRegistrator interface.

But first a word about code structure...

Liferay's recommendation is to use a single java package to contain all of your upgrade code, typically named upgrade, located in your portlet web module at the same level as your portlet package (if you have one).

So if your portlet code is in com.example.myapp.portlet, you're going to have a com.example.myapp.upgrade package.

In here you'll have sub-packages for all upgrade versions supported, so you might have "v1_0_0" and "v1_0_1", etc.  Upgrade step implementations will be in the subpackage for the upgrade level they support.

So now we have enough details to start building out the upgrade definition. Start by updating your build.gradle file to introduce a new dependency:

compileOnly group: "com.liferay", name: "com.liferay.portal.upgrade", version: "2.3.0"

This pulls in some utility classes we'll be using below.

Let's assume we're building a brand new module and just want to get a placeholder upgrade definition in place. This is quite easily done by adding a single component to our project:

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class ExampleUpgradeStepRegistrator implements UpgradeStepRegistrator {

    @Activate
    protected void activate(final BundleContext bundleContext) {
        _bundleName = bundleContext.getBundle().getSymbolicName();
    }

    @Override
    public void register(Registry registry) {

        // For first-time deployments this will start by creating the initial
        // release record with the initial version of 1.0.0. Also use the dummy
        // upgrade step since we're not doing anything in this upgrade.

        registry.register(_bundleName, "0.0.0", "1.0.0", new DummyUpgradeStep());
    }

    private String _bundleName;
}

So that's pretty much it.  Including this class in your module will result in it registering a Release with version 1.0.0, and you have nothing else to worry about.

When you're ready to release version 1.1.0 of your component, things get a little more fun.

In your v1_1_0 package you'll create classes that implement the UpgradeStep interface typically by extending the UpgradeProcess abstract base class or perhaps a more appropriate class from the above table. Either way you'll define separate classes to handle different aspects of the upgrade.

We'd then come back to the UpgradeStepRegistrator implementation to add the upgrade steps by including another registry call:

registry.register(_bundleName, "1.0.0", "1.1.0", new UpgradeMyTableStep(), new UpgradeMyDataStep(), new UpgradeMyConfigAdmin());

When processing this upgrade definition, the upgrade service invokes the steps in the order provided. So obviously you should take care to order your steps so that each one can succeed given only the effects of the steps that ran before it, never depending on a later step.
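As a sketch of that ordering contract (illustrative code only, not the Liferay API; all names here are mine), the steps behave like a simple ordered pipeline where each step sees only what earlier steps produced:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the ordering semantics: steps for a version range run
// one after another, each observing only the effects of earlier steps.
public class UpgradeOrderSketch {

    interface Step {
        void upgrade(List<String> applied);
    }

    public static List<String> runSteps(List<Step> steps) {
        List<String> applied = new ArrayList<>();

        for (Step step : steps) {
            // each step may rely on entries added by earlier steps only
            step.upgrade(applied);
        }

        return applied;
    }

    public static void main(String[] args) {
        List<String> applied = runSteps(Arrays.asList(
            a -> a.add("UpgradeMyTableStep"),
            a -> a.add("UpgradeMyDataStep"),
            a -> a.add("UpgradeMyConfigAdmin")));

        System.out.println(applied);
    }
}
```

If UpgradeMyDataStep needs the new column, UpgradeMyTableStep must come first in the register() call, exactly as in the example above.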

Database Upgrades

So one of the common issues with Service Builder modules is that the tables will be created when you first deploy the module to a new environment, but updates will not be processed. I think we could argue on one side that it is a bug or on the other side that expecting Service Builder to track data model changes is far outside of the tool's responsibility.

I'm not going to argue it either way; we are where we are, and solving from this point is all I'm really worried about.

As I previously stated, the com.liferay.portal.kernel.upgrade.UpgradeProcess is going to be the perfect base class to accommodate a database update.

UpgradeProcess extends com.liferay.portal.kernel.dao.db.BaseDBProcess which brings the following methods:

  • hasTable() - Determines if the listed table exists.
  • hasColumn() - Determines if the table has the listed column.
  • hasColumnType() - Determines if the listed column in the listed table has the provided type.
  • hasRows() - Determines if the listed table has rows (in order to provide logic to migrate data during an upgrade).
  • runSQL() - Runs the given SQL statement against the database.

UpgradeProcess itself has two upgradeTable() methods, both of which add a new table to the database.  The difference between the two: one is simple and creates a table based on the name and a multidimensional array of column detail objects; the second takes additional arguments for fixed SQL for the table, indexes, etc.

Additionally UpgradeProcess has a number of inner support classes to facilitate table alterations:

  • AlterColumnName - A class to encapsulate details to change a column name.
  • AlterColumnType - A class to encapsulate details to change a column type.
  • AlterTableAddColumn - A class to encapsulate details to add a new column to a table.
  • AlterTableDropColumn - A class to encapsulate details to drop a column from a table.

Let's write a quick upgrade method to add a column, change another column's name and another column's type.  To facilitate this, our class will extend UpgradeProcess and will need to implement a doUpgrade() method:

public void doUpgrade() throws Exception {

    // create all of the alterables
    Alterable addColumn = new AlterTableAddColumn("COL_NEW");
    Alterable fixColumn = new AlterColumnType("COL_NEW", "LONG");
    Alterable changeName = new AlterColumnName("OLD_COL_NAME", "NEW_COL_NAME");
    Alterable changeType = new AlterColumnType("ENTITY_PK", "LONG");

    // apply the alterations to the MyEntity Service Builder entity
    alter(MyEntity.class, addColumn, fixColumn, changeName, changeType);
}

So the alterations are keyed off of your Service Builder entity, but otherwise you don't have to worry much about the SQL needed to apply these kinds of alterations to your entity's table.


Using just what has been provided here, you can integrate a smooth and automatic upgrade process into your modules, including upgrading your Service Builder's entity backing tables since SB won't do that for you.

Where can you find more details on doing some nitty-gritty upgrade activities? Why, the Liferay source, of course.  Here's a fairly complex set of upgrade details to start your review:



My good friend and coworker Nathan Shaw forwarded me a reference that I think is worth adding here.  Thanks Nathan!

David H Nebinger 2017-02-07T15:25:18Z
Categories: CMS, ECM

Liferay 7 CE/Liferay DXP Scheduled Tasks

Liferay - Mon, 02/06/2017 - 19:33

In Liferay 6.x, scheduled tasks were kind of easy to implement.

I mean, you'd implement a class that implements the Liferay Message Bus's MessageListener interface and then add the details in the <scheduler-entry /> sections in your liferay-portlet.xml file and you'd be off to the races.

Well, things are not so simple with Liferay 7 CE / Liferay DXP. In fact, I couldn't find a reference on them anywhere, so I thought I'd whip up a quick blog.

Of course I'm going to pursue this as an OSGi-only solution.

StorageType Information

Before we can schedule a job, we first need to discuss the supported StorageTypes. Liferay has three:

  • StorageType.MEMORY_CLUSTERED - This is the default storage type, the one you'll typically want to shoot for. It combines two aspects, MEMORY and CLUSTERED. MEMORY means the job information (next run, etc.) is held only in memory and not persisted anywhere. CLUSTERED means the job is cluster-aware and will only run on one node in the cluster.
  • StorageType.MEMORY - For this storage type, no job information is persisted. The important part here is that you may miss some job runs in cases of outages. For example, if you have a job to run on the 1st of every month but you have a big outage and the server/cluster is down on the 1st, the job will not run. And unlike in PERSISTED, when the server comes up the job will not run even though it was missed. Note that this storage type is not cluster-aware, so your job will run on every node in the cluster which could cause duplicate runs.
  • StorageType.PERSISTED - This is the opposite of MEMORY as job details will be persisted in the database. For the missed job above, when the server comes up on the 2nd it will realize the job was missed and will immediately process the job. Note that this storage type relies on cluster-support facilities in the storage engine (Quartz's implementation discussed here:

So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (i.e. you're running a report to generate a PDF and email, you wouldn't want your 4 node cluster doing the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster.

Choosing between MEMORY[_CLUSTERED] and PERSISTED comes down to how resilient you need to be in the case of missed job fire times. For example, if that monthly report is mission critical, you might elect for PERSISTED to ensure the report goes out as soon as the cluster is back up and ready to pick up the missed job. If it's not mission critical, it is easier to stick with one of the MEMORY options.

Finally, even if you're not currently in a cluster, I would encourage you to make choices as if you were running in a cluster right from the beginning. The last thing you want to have to do when you start scaling up your environment is trying to figure out why some previous regular tasks are not running as they used to when you had a single server. 

Adding StorageType To SchedulerEntry

We'll be handling our scheduling shortly, but for now we'll worry about the SchedulerEntry. The SchedulerEntry object contains most of the details about the scheduled task to be defined, but it does not have details about the StorageType. Remember that MEMORY_CLUSTERED is the default, so if you're going to be using that type, you can skip this section. But to be consistent, you can still apply the changes in this section even for the MEMORY_CLUSTERED type.

To add StorageType details to our SchedulerEntry, we need to make our SchedulerEntry implementation class also implement the com.liferay.portal.kernel.scheduler.StorageTypeAware interface. When Liferay's scheduler implementation classes are identifying the StorageType to use, they start with MEMORY_CLUSTERED and will only use another StorageType if the SchedulerEntry implements this interface.

So let's start by defining a SchedulerEntry wrapper class that implements the SchedulerEntry interface as well as the StorageTypeAware interface:

public class StorageTypeAwareSchedulerEntryImpl extends SchedulerEntryImpl
    implements SchedulerEntry, StorageTypeAware {

    /**
     * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
     * @param schedulerEntry
     */
    public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry) {
        super();

        _schedulerEntry = schedulerEntry;

        // use the same default that Liferay uses.
        _storageType = StorageType.MEMORY_CLUSTERED;
    }

    /**
     * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
     * @param schedulerEntry
     * @param storageType
     */
    public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry, final StorageType storageType) {
        super();

        _schedulerEntry = schedulerEntry;
        _storageType = storageType;
    }

    @Override
    public String getDescription() {
        return _schedulerEntry.getDescription();
    }

    @Override
    public String getEventListenerClass() {
        return _schedulerEntry.getEventListenerClass();
    }

    @Override
    public StorageType getStorageType() {
        return _storageType;
    }

    @Override
    public Trigger getTrigger() {
        return _schedulerEntry.getTrigger();
    }

    public void setDescription(final String description) {
        _schedulerEntry.setDescription(description);
    }

    public void setTrigger(final Trigger trigger) {
        _schedulerEntry.setTrigger(trigger);
    }

    public void setEventListenerClass(final String eventListenerClass) {
        _schedulerEntry.setEventListenerClass(eventListenerClass);
    }

    private SchedulerEntryImpl _schedulerEntry;
    private StorageType _storageType;
}

Now you can use this class to wrap a current SchedulerEntryImpl yet include the StorageTypeAware implementation.

Defining The Scheduled Task

We have all of the pieces now to build out the code for a scheduled task in Liferay 7 CE / Liferay DXP:

@Component(
    immediate = true,
    property = {"cron.expression=0 0 0 * * ?"},
    service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {

    /**
     * doReceive: This is where the magic happens, this is where you want to do the work for
     * the scheduled job.
     * @param message This is the message object tied to the job. If you stored data with the
     * job, the message will contain that data.
     * @throws Exception In case there is some sort of error processing the task.
     */
    @Override
    protected void doReceive(Message message) throws Exception {
        _log.info("Scheduled task executed...");
    }

    /**
     * activate: Called whenever the properties for the component change (ala Config Admin)
     * or OSGi is activating the component.
     * @param properties The properties map from Config Admin.
     * @throws SchedulerException in case of error.
     */
    @Activate
    @Modified
    protected void activate(Map<String, Object> properties) throws SchedulerException {

        // extract the cron expression from the properties
        String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

        // create a new trigger definition for the job
        String listenerClass = getEventListenerClass();
        Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

        // wrap the current scheduler entry in our new wrapper, using the
        // PERSISTED storage type, and set the wrapper back on the class field.
        schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(schedulerEntryImpl, StorageType.PERSISTED);

        // update the trigger for the scheduled job
        schedulerEntryImpl.setTrigger(jobTrigger);

        // if we were initialized (i.e. if this is called due to CA modification)
        if (_initialized) {
            // first deactivate the current job before we schedule
            deactivate();
        }

        // register the scheduled task
        _schedulerEngineHelper.register(this, schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

        // set the initialized flag
        _initialized = true;
    }

    /**
     * deactivate: Called when OSGi is deactivating the component.
     */
    @Deactivate
    protected void deactivate() {

        // if we previously were initialized
        if (_initialized) {
            // unschedule the job so it is cleaned up
            try {
                _schedulerEngineHelper.unschedule(schedulerEntryImpl, getStorageType());
            } catch (SchedulerException se) {
                if (_log.isWarnEnabled()) {
                    _log.warn("Unable to unschedule trigger", se);
                }
            }

            // unregister this listener
            _schedulerEngineHelper.unregister(this);
        }

        // clear the initialized flag
        _initialized = false;
    }

    /**
     * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
     * @return StorageType The storage type to use.
     */
    protected StorageType getStorageType() {
        if (schedulerEntryImpl instanceof StorageTypeAware) {
            return ((StorageTypeAware) schedulerEntryImpl).getStorageType();
        }

        return StorageType.MEMORY_CLUSTERED;
    }

    /**
     * setModuleServiceLifecycle: So this requires some explanation...
     *
     * OSGi will start a component once all of its dependencies are satisfied. However, there
     * are times where you want to hold off until the portal is completely ready to go.
     *
     * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
     * component, which will not be available until, surprise surprise, the portal has finished
     * initializing.
     *
     * With this reference, this component's activation waits until portal initialization has completed.
     * @param moduleServiceLifecycle
     */
    @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
    protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
    }

    @Reference(unbind = "-")
    protected void setTriggerFactory(TriggerFactory triggerFactory) {
        _triggerFactory = triggerFactory;
    }

    @Reference(unbind = "-")
    protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
        _schedulerEngineHelper = schedulerEngineHelper;
    }

    // the default cron expression is to run daily at midnight
    private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

    private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

    private volatile boolean _initialized;
    private TriggerFactory _triggerFactory;
    private SchedulerEngineHelper _schedulerEngineHelper;
}

So the code here is kinda thick, but I've documented it as fully as I can.

The base class, BaseSchedulerEntryMessageListener, is a common base class for all schedule-based message listeners. It is pretty short, so you are encouraged to open it up in the source and peruse it to see what few services it provides.

The bulk of the code you can use as-is. You'll probably want to come up with your own default cron expression constant and property so you're not running at midnight (and remember that cron expressions are evaluated against the timezone your app server is configured to run in).
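For reference, a Quartz-style cron expression like the one above has six (optionally seven) space-separated fields: seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year. A small illustrative helper (the class is mine, for explanation only) makes the mapping explicit:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: label the fields of a Quartz-style cron expression.
public class CronFieldsSketch {

    private static final String[] FIELD_NAMES = {
        "seconds", "minutes", "hours", "day-of-month", "month", "day-of-week", "year"
    };

    public static Map<String, String> label(String cronExpression) {
        String[] fields = cronExpression.trim().split("\\s+");

        Map<String, String> labeled = new LinkedHashMap<>();

        for (int i = 0; i < fields.length && i < FIELD_NAMES.length; i++) {
            labeled.put(FIELD_NAMES[i], fields[i]);
        }

        return labeled;
    }

    public static void main(String[] args) {
        // "0 0 0 * * ?" = second 0 of minute 0 of hour 0 (midnight), every day,
        // with "?" meaning "no specific value" for day-of-week
        System.out.println(label("0 0 0 * * ?"));
    }
}
```

So "0 0 12 ? * MON-FRI", for example, would fire at noon on weekdays.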

And you'll certainly want to fill out the doReceive() method to actually build your scheduled task logic.

One More Thing...

One thing to keep in mind, especially with the MEMORY and MEMORY_CLUSTERED storage types: Liferay does not do anything to prevent running the same jobs multiple times.

For example, say you have a job that takes 10 minutes to run, but you schedule it to run every 5 minutes. There's no way the job can complete in 5 minutes, so multiple jobs start piling up. Sure, there's a pool backing the implementation to ensure the system doesn't run away and die on you, but even that might lead to disastrous results.

So take care in your scheduling. Know what the worst case scenario is for timing your jobs and use that information to define a schedule that will work even in this situation.

You may even want to consider some sort of locking or semaphore mechanism to prevent the same job running in parallel at all.
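Within a single JVM, a minimal sketch of that idea (a hypothetical class of mine; a real cluster would need a shared lock, such as a database row or a distributed lock service) is to simply skip a fire time when the previous run hasn't finished:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: skip a scheduled run entirely if the previous run is still going.
// This only guards overlap within one JVM, not across cluster nodes.
public class NonOverlappingJobSketch {

    private final AtomicBoolean _running = new AtomicBoolean(false);

    public boolean runIfIdle(Runnable job) {
        if (!_running.compareAndSet(false, true)) {
            // previous run still in progress; skip this fire time
            return false;
        }

        try {
            job.run();

            return true;
        }
        finally {
            _running.set(false);
        }
    }
}
```

You would call runIfIdle() from doReceive() and log (or count) the skipped runs so an undersized schedule becomes visible in monitoring.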

Just something to keep in mind...


So this is how all of those scheduled tasks from liferay-portlet.xml get migrated into the OSGi environment. Using this technique, you now have a migration path for this aspect of your legacy portlet code.

David H Nebinger 2017-02-07T00:33:38Z
Categories: CMS, ECM

Tutorial: Creating a JSON Web Service Using Service Builder

Liferay - Mon, 02/06/2017 - 18:38
This article talks about how to create a JSON web service based on a Service Builder service.

Prerequisite knowledge: Service Builder, JSON Web Service.

When we intend to make a service act as a web provider serving JSON, we can utilize Service Builder to build our JSON web service. We have an article that talks about how to use Service Builder.

Requirement: I need to build a trading track system to record monthly trading, and I need a web service that returns my bank's monthly trading data as JSON. (In the real world the monthly data would normally come from a query rather than being saved as a record, but let's make it like this.)

Step 1: Define Your Entity and Service

First I need to build an entity and services with Service Builder. Please check the Creating Service Builder MVC Portlet in Liferay 7 with Liferay IDE 3 blog as a guide.

I call my project "monthly-trading". Define the entity as in the following service.xml. Note that I set remote-service to "true".

<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN" "">
<service-builder package-path="">
    <namespace>Banking</namespace>
    <entity local-service="true" name="MonthlyTrading" remote-service="true" uuid="true">

        <!-- PK fields -->
        <column name="monthlyTradingId" primary="true" type="long" />

        <!-- Group instance -->
        <column name="groupId" type="long" />

        <!-- Audit fields -->
        <column name="companyId" type="long" />
        <column name="userId" type="long" />
        <column name="userName" type="String" />
        <column name="createDate" type="Date" />
        <column name="modifiedDate" type="Date" />

        <!-- Other fields -->
        <column name="year" type="int" />
        <column name="month" type="int" />
        <column name="volume" type="int" />

        <!-- Order -->
        <order by="asc">
            <order-column name="month" />
        </order>

        <!-- Finder methods -->
        <finder name="Year" return-type="Collection">
            <finder-column name="year" />
        </finder>
    </entity>
</service-builder>

Step 2: Build Your Service

Once you have finished, run buildService to build your service.

After all the interfaces and impls have been generated, you can modify your LocalServiceImpl to add your own local service implementation. In my example I simply added an add method in MonthlyTradingLocalServiceImpl, ignoring all validation.

public MonthlyTrading addMonthlyTrading(int year, int month, int volume) {
    long pk = counterLocalService.increment();

    MonthlyTrading monthlyTrading = monthlyTradingPersistence.create(pk);

    monthlyTrading.setYear(year);
    monthlyTrading.setMonth(month);
    monthlyTrading.setVolume(volume);

    return monthlyTradingPersistence.update(monthlyTrading);
}

public List<MonthlyTrading> getMonthlyTradingByYear(int year) {
    return monthlyTradingPersistence.findByYear(year);
}

Run buildService again to regenerate the interfaces.

Now I can modify my ServiceImpl to call my LocalService:

@JSONWebService
public MonthlyTrading addMonthlyTrading(int year, int month, int volume) {
    return monthlyTradingLocalService.addMonthlyTrading(year, month, volume);
}

@JSONWebService
public List<MonthlyTrading> getMonthlyTradingByYear(int year) {
    return monthlyTradingLocalService.getMonthlyTradingByYear(year);
}

Run buildService again and deploy.

By utilizing the @JSONWebService annotation you can make your class whitelist- or blacklist-based, and you can enable or ignore individual methods in the JSON web service. For more detail please check the Liferay Dev KB.

Best practice tip: check user permissions in your ServiceImpl to make sure all remote service calls are secure.

Step 3: Use Your Remote Service

Now you can navigate to http://localhost:8080/api/jsonws in your browser and choose "banking" as the context name. The custom JSON web service is now in the list.

You can find a JavaScript example, a curl example, and a URL example after you invoke the service.

This is how we add a JSON web service through Service Builder.

Hope you enjoy it.
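As an aside on how those invocation URLs are formed: as far as I can tell from the standard JSONWS mapping, Java method names are exposed as lowercase, hyphen-separated path segments (so getMonthlyTradingByYear would appear under the banking context as get-monthly-trading-by-year). A small sketch of that naming convention (the helper class is mine, for illustration):

```java
// Sketch of the camelCase-to-hyphenated mapping JSONWS appears to use for
// method names in its URL paths.
public class JsonWsPathSketch {

    public static String toPathSegment(String methodName) {
        StringBuilder sb = new StringBuilder();

        for (char c : methodName.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // an uppercase letter starts a new hyphenated segment
                sb.append('-').append(Character.toLowerCase(c));
            }
            else {
                sb.append(c);
            }
        }

        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPathSegment("getMonthlyTradingByYear"));
        // → get-monthly-trading-by-year
    }
}
```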
Source code for this tutorial is here:   Workable jars for testing: API, Service   Related article: Neil Jin 2017-02-06T23:38:01Z
Categories: CMS, ECM

Introducing Liferay Workspace

Liferay - Thu, 02/02/2017 - 15:09
A search for a better SDK

A little over a year ago, the Dev Tools Team began a new project hoping to fix the issues we encountered with the Plugins SDK. We had just finished modularizing our developer tools and were considering revamping the Plugins SDK to use all the new tools. This in itself would be a major improvement since we would no longer require a 300mb bundle just to use it. We decided, however, that it was time for a change.

Our first commits for Liferay Workspace started early last year (January 7, 2016 to be exact) and we've made a huge amount of progress since then. Many of you may have already created projects in Workspace since it's been in the wild for a while now. If you haven’t tried it yet, you can create one by first installing Blade CLI and then running blade init <my-project>. You should experiment with it a bit before proceeding to get some context because I won't be covering every aspect, but instead, just our design decisions.

Why did we build Workspace the way we did?

Liferay Workspace is an opinionated, yet highly configurable, multi-level project structure. It combines the strengths we found in the Plugins SDK, ideas from other build tools such as Maven, Ruby on Rails, and Gradle, and feedback from our users.

Having structure is good. Forcing structure is bad.

The most polarizing complaint we heard about the Plugins SDK was about its structure. It was laid out with preconfigured folders of hooks, layouttpl, portlets, themes, webs, etc. Some people loved this structure since it was clear where to place their projects, but others hated it.

When developing Workspace, we decided that the structure was good for folks new to Liferay, but could be limiting for those who knew it well. Workspace comes with the following folders: configs, modules, themes, and wars. It’s clear what should go in each folder. If you don't like it, however, you can rename them, have multiple versions of the same types, or delete them.

Upgrading your SDK needs to be easy

The Plugins SDK was completely configured in XML files. Since everything Liferay did was through the build.xml files, and we didn't provide any easy way for you to make changes, upgrading your SDK was a chore.

While the original design of Liferay Workspace was to do everything in the build.gradle files, we quickly realized that made upgrading a problem again. We rewrote everything into a plugin; issue solved. The biggest benefit is that we're now rapidly adding new features to Workspace and constantly releasing new versions.

Project management should be automatic

One of the nice things with Ant was that any project with a build.xml was automatically included in the build. If you've used Gradle, you know that everything needs to be manually included in the settings.gradle file. I found this annoying coming from the Plugins SDK.

To combat this, we applied the Workspace Gradle plugin, which automatically adds projects as long as they match certain heuristics for their folder. For example, modules should have a BND file, themes should have a Gulp file, WARs (which were a little harder to detect) should contain a source folder, etc. If they meet these criteria, they're added to your project build.
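A rough sketch of that detection idea (hypothetical code of mine; the real Workspace plugin's checks are more involved and look at actual files on disk) classifies a folder by the marker entries it contains:

```java
import java.util.Set;

// Sketch: classify a project folder by its marker files. Works on a set of
// entry names, a simplification of real filesystem checks.
public class ProjectTypeSketch {

    public static String classify(Set<String> entries) {
        if (entries.contains("bnd.bnd")) {
            return "module";
        }

        if (entries.contains("gulpfile.js")) {
            return "theme";
        }

        if (entries.contains("src")) {
            return "war";
        }

        return "ignored";
    }

    public static void main(String[] args) {
        System.out.println(classify(Set.of("bnd.bnd", "build.gradle")));
        // → module
    }
}
```

The appeal of heuristics like these is that settings.gradle never has to be touched: drop a folder with the right marker into modules/, themes/, or wars/ and it joins the build.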

Everyone should be on the same environment

The hardest thing about using the Plugins SDK was ensuring everyone was using the same thing. Bundles were configured individually. Everyone had a different version of Ant installed. Configurations were all over the place.

The best thing about using a Gradle project is that you don't need Gradle installed to use it. We took that concept further and allowed Workspace users to download a bundle and configure it all as part of the build process. This means every developer uses the same Liferay Portal version. It also means that if certain parts of the Portal need to be configured in a similar fashion, the project can configure them for everyone. Moreover, CI can join the build process more effectively because it runs the same things as everyone else.

You should decide your own build tools

One thing the Plugins SDK never gave you was a choice. You had to build all your plugins using Ant, even if it wasn't the best tool for the job.

In Workspace, we decided to be more flexible with our tools. You can build your theme using Node or Gradle. If you still need the Plugins SDK for legacy portlets, we've provided backwards compatibility. And if you don't want to use Gradle at all, we've provided a Maven version of Workspace. Surprise!

Providing a Maven Workspace was not part of our initial vision, but as we began to work out our Maven story, it made more and more sense. We finally published the archetype this week. If you would like to try it, execute

mvn archetype:generate -DarchetypeGroupId=com.liferay -DarchetypeArtifactId=com.liferay.project.templates.workspace -DarchetypeVersion=1.0.2

For documentation (sorry, the Maven Workspace docs are coming):


For an example of it in use (includes examples of Maven):

Please try out the different versions of Workspace. We are eager to hear your feedback.

David Truong 2017-02-02T20:09:17Z
Categories: CMS, ECM

Debugging Search Queries in Liferay

Liferay - Thu, 02/02/2017 - 05:55

If you want to find out more information about how a search is being executed in Liferay, you can modify the logging settings to output search queries to the log. To do this, go to Control Panel -> Server Administration -> Log Levels -> Add Category and add a new entry for the relevant class at log level DEBUG. (Please note that depending on your situation, changing the logging can dramatically impact performance, so it's probably only sensible to do this in a development environment.)

Then you can see the search queries in the Liferay log. For example, here is a search for web content related to "france" in 6.2 using Lucene: 

10:52:29,342 DEBUG [http-bio-8080-exec-19][SearchEngineUtil:663] Search query +(+((+(entryClassName:com.liferay.portlet.journal.model.JournalArticle) -(status:8)) (+(entryClassName:com.liferay.portlet.journal.model.JournalFolder) -(status:8)) (+(entryClassName:com.liferay.portlet.messageboards.model.MBMessage) -(status:8) +(discussion:true) +((-(status:8) +(classNameId:10109)) (-(status:8) +(classNameId:10013))))) +(+(groupId:10182) +(scopeGroupId:10182))) +(assetCategoryTitles:france* assetTagNames:france* comments:france content:france description:france properties:france title:france url:france userName:france* assetCategoryTitles_en_US:france* classPK:france content_en_US:france description_en_US:france entryClassPK:france title_en_US:france type:france articleId:FRANCE)

Unfortunately, information about how a search is sorted is not added to the log. But the query is very helpful for understanding how Liferay's search works!

Allen Ziegenfus 2017-02-02T10:55:45Z
Categories: CMS, ECM

Liferay/OSGi Annotations - What they are and when to use them

Liferay - Thu, 02/02/2017 - 01:14

When you start reviewing Liferay 7 CE/Liferay DXP code, you run into a lot of annotations in a lot of different ways.  They can all seem kind of overwhelming when you first happen upon them, so I thought I'd whip up a little reference guide, kind of explaining what the annotations are for and when you might need to use them in your OSGi code.

So let's dive right in...


@Component

In the OSGi world this is the all-important "Declarative Services" annotation defining a service implementation.  DS is an aspect of OSGi for declaring a service dynamically, and it has a slew of plumbing in place to allow other components to get wired to the component.

There are three primary attributes that you'll find for this annotation:

  • immediate - Often set to true, this will ensure the component is started right away and not wait for a reference wiring or lazy startup.
  • properties - Used to pass in a set of OSGi properties to bind to the component.  The component can see the properties, but more importantly other components will be able to see the properties too.  These properties help to configure the component but also are used to support filtering of components.
  • service - Defines the service that the component implements.  Sometimes this is optional, but often it is mandatory to avoid ambiguity on the service the component wants to advertise.  The service listed is often an interface, but you can also use a concrete class for the service.

When are you going to use it?  Whenever you create a component that you want or need to publish into the OSGi container.  Not all of your classes need to be components.  You'll declare a component when code needs to plug into the Liferay environment (i.e. add a product nav item, define an MVC command handler, override a Liferay component) or to plug into your own extension framework (see my recent blog on building a healthcheck system).


@Reference

This is the counterpart to the @Component annotation.  @Reference is used to get OSGi to inject a component reference into your component. This is a key thing here: since OSGi is doing the injection, it will only work in an OSGi @Component class.  @Reference annotations are ignored in non-components, and in fact they are ignored in subclasses too.  Any injected references you need must be declared in the @Component class itself.

This is, of course, fun when you want to define a base class with a number of injected services; the base class does not get the @Component annotation (because it is not complete) and @Reference annotations are ignored in non-component classes, so the injection will never occur.  You end up copying all of the setters and @Reference annotations to all of the concrete subclasses and boy, does that get tedious.  But it is necessary and something to keep in mind.

Probably the most common attribute you're going to see here is the "unbind" attribute, and you'll often find it in the form of @Reference(unbind = "-") on a setter method. When you use a setter method with @Reference, OSGi will invoke the setter with the component to use, but the unbind attribute indicates that there is no method to call when the component is unbinding, so basically you're saying you don't handle components disappearing behind your back.  For the most part this is not a problem, server starts up, OSGi binds the component in and you use it happily until the system shuts down.

Another attribute you'll see here is target. Target is used as a filter mechanism; remember the properties covered in @Component? With the target attribute, you specify a query that identifies a more specific instance of a component that you'd like to receive.  Here's one example:

@Reference(
	target = "(javax.portlet.name=" + NotificationsPortletKeys.NOTIFICATIONS + ")",
	unbind = "-"
)
protected void setPanelApp(PanelApp panelApp) {
	_panelApp = panelApp;
}

The code here wants to be given an instance of a PanelApp component, but it's looking specifically for the PanelApp component tied to the notifications portlet.  Any other PanelApp component won't match the filter and won't be applied.
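
To see how a target filter selects among components, here's a simplified, pure-Java sketch of matching a single "(key=value)" clause against component property maps. This is an illustration only: the `matches` helper is hypothetical, real OSGi target filters use the full LDAP filter syntax, and the property values shown are assumptions.

```java
import java.util.Map;

public class TargetFilterDemo {

    // Hypothetical matcher for a single "(key=value)" clause; real OSGi
    // filters also support &, |, ! and wildcards.
    static boolean matches(String filter, Map<String, Object> props) {
        String body = filter.substring(1, filter.length() - 1); // strip ( )
        int eq = body.indexOf('=');
        String key = body.substring(0, eq);
        String value = body.substring(eq + 1);
        return value.equals(String.valueOf(props.get(key)));
    }

    public static void main(String[] args) {
        // Properties as they might appear on two PanelApp components.
        Map<String, Object> notifications =
            Map.of("javax.portlet.name", "NotificationsPortlet");
        Map<String, Object> other =
            Map.of("javax.portlet.name", "SomeOtherPortlet");

        String target = "(javax.portlet.name=NotificationsPortlet)";

        System.out.println(matches(target, notifications)); // true
        System.out.println(matches(target, other));         // false
    }
}
```

Only the component whose properties satisfy the filter is bound; all others are skipped, which is exactly what happens with the PanelApp reference above.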

There are some attributes that you will sometimes find here that are pretty important, so I'm going to go into some details on those.

The first is the cardinality attribute.  The default value is ReferenceCardinality.MANDATORY, but other values are OPTIONAL, MULTIPLE, and AT_LEAST_ONE. The meanings of these are:

  • MANDATORY - The reference must be available and injected before this component will start.
  • OPTIONAL - The reference is not required for the component to start, and the component will function without an assigned reference.
  • MULTIPLE - Multiple resources may satisfy the reference and the component will take all of them, but like OPTIONAL the reference is not needed for the component to start.
  • AT_LEAST_ONE - Multiple resources may satisfy the reference and the component will take all of them, but at least one is mandatory for the component to start.

The multiple options allow you to get multiple calls with matching references.  This really only makes sense if you use the @Reference annotation on a setter method and, in the body of the method, add to a list or array.  An alternative is to use a ServiceTracker so you don't have to manage the list yourself.
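
As a pure-Java sketch of the bookkeeping such a multiple-cardinality setter pair performs — the method names and the use of String as the "service" type are illustrative only; in a real component the add method would carry the @Reference annotation with cardinality = MULTIPLE and an unbind attribute naming the remove method:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class MultiCardinalityDemo {

    // Thread-safe list because OSGi can bind/unbind services at any time.
    private final List<String> listeners = new CopyOnWriteArrayList<>();

    // OSGi would call this once per matching service (the "bind" method).
    protected void addListener(String listener) {
        listeners.add(listener);
    }

    // ...and this as each service goes away (the "unbind" method).
    protected void removeListener(String listener) {
        listeners.remove(listener);
    }

    public static void main(String[] args) {
        MultiCardinalityDemo component = new MultiCardinalityDemo();

        // Simulate OSGi binding two services, then unbinding one.
        component.addListener("sensorA");
        component.addListener("sensorB");
        component.removeListener("sensorA");

        System.out.println(component.listeners); // [sensorB]
    }
}
```

A ServiceTracker replaces exactly this add/remove boilerplate, which is why it's often the better choice.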

The optional options allow your component to start without an assigned reference.  This kind of thing can be useful if you have a circular reference issue: A references B which references C which references A.  If all three use MANDATORY, none will start because the references cannot be satisfied (only started components can be assigned as references).  You break the circle by having one component treat its reference as optional; then the components can start and the references will be resolved.

The next important @Reference attribute is the policy.  Policy can be either ReferencePolicy.STATIC (the default) or ReferencePolicy.DYNAMIC.  The meanings of these are:

  • STATIC - The component will only be started when there is an assigned reference, and will not be notified of alternative services as they become available.
  • DYNAMIC - The component will start whether references are available or not, and it will accept new references as they become available.

The reference policy controls what happens after your component starts when new reference options become available.  With STATIC, new reference options are ignored; with DYNAMIC, your component is willing to change.

Along with the policy, another important @Reference attribute is the policyOption.  This attribute can be either ReferencePolicyOption.RELUCTANT (the default) or ReferencePolicyOption.GREEDY.  The meanings of these are:

  • RELUCTANT - For single reference cardinality, new reference potentials that become available will be ignored.  For multiple reference cardinality, new reference potentials will be bound.
  • GREEDY - As new reference potentials become available, the component will bind to them.

Whew, lots of options here, so let's talk about common groupings.

First is the default: ReferenceCardinality.MANDATORY, ReferencePolicy.STATIC and ReferencePolicyOption.RELUCTANT.  This boils down to: your component must have exactly one reference service to start, and regardless of new services that start, your component is going to ignore them.  These are good, sensible defaults and promote stability for your component.

Another common grouping you'll find in the Liferay source is ReferenceCardinality.OPTIONAL or MULTIPLE, ReferencePolicy.DYNAMIC and ReferencePolicyOption.GREEDY.  In this configuration, the component will function with or without reference service(s), but the component allows for changing/adding references on the fly and wants to bind to new references when they are available.

Other combinations are possible, but you need to understand impacts to your component.  After all, when you declare a reference, you're declaring that you need some service(s) to make your component complete.  Consider how your component can react when there are no services, or what happens if your component stops because dependent service(s) are not available. Consider your perfect world scenario as well as a chaotic nightmare of redeployments, uninstalls, service gaps and identify how your component can weather the chaos.  If you can survive the chaos situation, you should be fine in the perfect world scenario.

Finally, when do you use the @Reference annotation?  When you need service(s) injected into your component from the OSGi environment.  These injections can come from your own module or from other modules in the OSGi container.  Remember that @Reference only works in OSGi components, but you can turn a class into a component by adding the @Component annotation.


@BeanReference

This is a Liferay annotation used to inject a reference to a Spring bean from the Liferay core.


@ServiceReference

This is a Liferay annotation used to inject a reference to a bean from a Spring Extender module.

Wait! Three Reference Annotations? Which should I use?

So there they are, the three different types of reference annotations.  Rule of thumb, most of the time you're going to want to just stick with the @Reference annotation.  The Liferay core Spring beans and Spring Extender module beans are also exposed as OSGi components, so @Reference should work most of the time.

If your @Reference isn't getting injected or is null, that's a sign that you should use one of the other reference annotations.  Here your choice is easy: if the bean is from the Liferay core, use @BeanReference; if it is from a Spring Extender module, use @ServiceReference instead.  Note that both annotations require your component to use the Spring Extender as well.  To set this up, check out any of your ServiceBuilder service modules to see how to update the build.gradle and bnd.bnd files, etc.


@Activate

The @Activate annotation is OSGi's equivalent of Spring's InitializingBean interface.  It declares a method that will be invoked after the component has started.

In the Liferay source, you'll find it used with three primary method signatures:

@Activate
protected void activate() { ... }

@Activate
protected void activate(Map<String, Object> properties) { ... }

@Activate
protected void activate(BundleContext bundleContext, Map<String, Object> properties) { ... }

There are other method signatures too, just search the Liferay source for @Activate and you'll find all of the different variations. Except for the no-argument activate method, they all depend on values injected by OSGi.  Note that the properties map is actually your properties from OSGi's Configuration Admin service.

When should you use @Activate? Whenever you need to complete some initialization tasks after the component is started but before it is used.  I've used it, for example, to set up and schedule Quartz jobs, verify database entities, etc.


@Deactivate

The @Deactivate annotation is the inverse of @Activate; it identifies a method that will be invoked when the component is being deactivated.


@Modified

The @Modified annotation marks the method that will be invoked when the component is modified, typically indicating that the @Reference(s) were changed.  In Liferay code, the @Modified annotation is often bound to the same method as @Activate, so one method handles both activation and modification.


@ProviderType

The @ProviderType annotation comes from BND and is generally considered a complex concern to wrap your head around.  Long story greatly over-simplified: @ProviderType is used by BND to define the version ranges assigned in the OSGi manifests of implementors, and it tries to restrict the range to a narrow version difference.

The idea here is to ensure that when an interface changes, the narrow version range on implementors would force implementors to update to match the new version on the interface.

When to use @ProviderType? Well, really you don't need to. You'll see this annotation scattered all through your ServiceBuilder-generated code. It's included in this list not because you need to do it, but because you'll see it and likely wonder why it is there.


@ImplementationClassName

This is a Liferay annotation for ServiceBuilder entity interfaces. It defines the class from the service module that implements the interface.

This won't be an annotation you need to use yourself, but at least you'll know why it's there.


@Transactional

This is another Liferay annotation bound to ServiceBuilder service interfaces. It defines the transaction requirements for the service methods.

This is another annotation you won't be expected to use.


@Indexable

The @Indexable annotation decorates a method that should result in an index update, typically tied to ServiceBuilder methods that add, update or delete entities.

You use the @Indexable annotation on your service implementation methods that add, update or delete indexed entities.  You'll know your entities are indexed if there is an associated Indexer implementation for your entity.


@SystemEvent

The @SystemEvent annotation is tied to ServiceBuilder-generated code that may result in system events.  System events work in concert with staging and the LAR export/import process.  For example, when a journal article is deleted, a SystemEvent record is generated.  When "Publish to Live" occurs in a staging environment, the delete SystemEvent ensures that the corresponding journal article on live is also deleted.

When would you use the @SystemEvent annotation? Honestly I'm not sure. With my 10 years of experience, I've never had to generate SystemEvent records or modify the publication or LAR process.  If anyone out there has had to use or modify an @SystemEvent annotation, I'd love to hear about your use case.


@Meta

OSGi has an XML-based system for defining configuration details for Configuration Admin.  The @Meta annotations from the BND project allow BND to generate that file based on the annotations used in your configuration interfaces.

Important Note: In order to use the @Meta annotations, you must add the following line to your bnd.bnd file:

-metatype: *

If you fail to add this, your @Meta annotations will not be used when generating the XML configuration file.

@Meta.OCD

This is the annotation for the "Object Class Definition" aspect, the container for the configuration details.  This annotation is used on the interface level to provide the id, name and localization details for the class definition.

When do you use this annotation? When you are defining a Configuration Admin interface that will have a panel in the System Settings control panel to configure the component.

Note that the @Meta.OCD attributes include localization settings.  This allows you to use your resource bundle to localize the configuration name, the field level details and the @ExtendedObjectClassDefinition category.


@Meta.AD

This is the annotation for the "Attribute Definition" aspect, the field-level annotation that defines the specification for a configuration element. The annotation provides the ID, name, description, default value and other details for the field.

When do you use this annotation? To provide details about the field definition that will control how it is rendered within the System Settings configuration panel.


@ExtendedObjectClassDefinition

This is a Liferay annotation that defines the category for the configuration (identifying the tab in the System Settings control panel where the configuration will appear) and the scope of the configuration.

Scope can be one of the following:

  • SYSTEM - Global configuration for the entire system; there will only be one configuration instance shared system-wide.
  • COMPANY - Company-level configuration that will allow one configuration instance per company in the portal.
  • GROUP - Group-level (site) configuration that allows for site-level configuration instances.
  • PORTLET_INSTANCE - This is akin to portlet instance preferences for scope, there will be a separate configuration instance per portlet instance.

When will you use this annotation? Every time you use the @Meta.OCD annotation, you're going to use the @ExtendedObjectClassDefinition annotation to at least define the tab the configuration will be added to.


@OSGiBeanProperties

This is a Liferay annotation used to define the OSGi component properties used to register a Spring bean as an OSGi component. You'll find it used often in ServiceBuilder modules to expose Spring beans into the OSGi container. Remember that ServiceBuilder is still Spring (and Spring Extender) based, so this annotation exposes those Spring beans as OSGi components.

When would you use this annotation? If you are using Spring Extender to use Spring within your module and you want to expose the Spring beans into OSGi so other modules can use the beans, you'll want to use this annotation.

I'm leaving a lot of details out of this section because the code for this annotation has extensive Javadoc. Check it out:


So those are all of the annotations I've encountered so far in Liferay 7 CE / Liferay DXP. Hopefully these details will help you in your Liferay development efforts.

Find an annotation I've missed or want some more details on those I've included? Just ask.

David H Nebinger 2017-02-02T06:14:36Z
Categories: CMS, ECM

Building an Extensible Health Check

Liferay - Thu, 02/02/2017 - 00:10

Alt Title: Cool things you can do with OSGi


One thing that many organizations like to stand up in their Liferay environments is a "health check".  The goal is to provide a simple URL that monitoring systems can invoke to verify servers are functioning correctly.  The monitoring systems review the time it takes to render the health check page and examine the contents, comparing them against known, expected results.  Should the page take too long to render or fail to return the expected result, the monitoring system will begin to alert operations staff.

The goal here is to allow operations to be proactive in resolving outage situations rather than being reactive when a client or supervisor calls in to see what is wrong with the site.

Now, I'm not going to deliver a complete working health check system here (sorry in advance if you're disappointed).

What I am going to do is use this as an excuse to show how you can leverage some OSGi stuff to build out Liferay things that you really couldn't have easily done before.

Basically I'm going to build out an extensible health check system which exposes a simple URL and generates a simple HTML table that lists health check sensors and status indicators, the words GREEN, YELLOW and RED, for the status of the sensors.  In case it isn't clear: GREEN is healthy, YELLOW means there are non-fatal issues, and RED means something is drastically wrong.
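
A page like that often also wants a single overall value. As a sketch of how the sensor readings might be rolled up into one — note that this aggregation rule is my own assumption, not part of the project described here:

```java
import java.util.Map;

public class OverallStatus {

    // Worst status wins: any RED makes the system RED, otherwise any
    // YELLOW makes it YELLOW, otherwise everything is GREEN.
    static String overall(Map<String, String> statuses) {
        if (statuses.containsValue("RED")) {
            return "RED";
        }
        if (statuses.containsValue("YELLOW")) {
            return "YELLOW";
        }
        return "GREEN";
    }

    public static void main(String[] args) {
        System.out.println(overall(Map.of("JVM Memory", "GREEN", "LDAP", "YELLOW")));
        System.out.println(overall(Map.of("JVM Memory", "GREEN", "Database", "GREEN")));
    }
}
```

A monitoring system can then match on that single word instead of parsing the whole table.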

Extensible is the key word in the previous paragraph.  I don't want the piece rendering the HTML to have to know about all of the registered sensors.  As a developer, I want to be able to create new sensors as new systems are integrated into Liferay.  I don't want to have to know up front about every possible sensor I'm ever going to create and deploy; I'll worry about adding new sensors as the need arises.

Defining The Sensor

So our health check system is going to be comprised of various sensors.  Our plan here is to follow the Unix philosophy of creating small, concise sensors that are each great at taking an individual sensor reading, rather than one really big, complicated sensor.

So to do this we're going to need to define our sensor interface:

public interface Sensor {

	public static final String STATUS_GREEN = "GREEN";
	public static final String STATUS_RED = "RED";
	public static final String STATUS_YELLOW = "YELLOW";

	/**
	 * getRunSortOrder: Returns the order that the sensor should run. Lower numbers
	 * run before higher numbers. When two sensors have the same run sort order, they
	 * are subsequently ordered by name.
	 * @return int The run sort order, lower numbers run before higher numbers.
	 */
	public int getRunSortOrder();

	/**
	 * getName: Returns the name of the sensor. The name is also displayed in the HTML
	 * for the health check report, so using human-readable names is recommended.
	 * @return String The sensor display name.
	 */
	public String getName();

	/**
	 * getStatus: This is the meat of the sensor, this method is called to actually take
	 * a sensor reading and return one of the status codes listed above.
	 * @return String The sensor status.
	 */
	public String getStatus();
}

Pretty simple, huh?  We accommodate the sorting of the sensors for running so we can have control over the test order, we support providing a display name for the HTML output, and we also provide the method for actually getting the sensor status.

That's all we need to get our extensible healthcheck system started.  Now that we have the sensor interface, let's build some real sensors.

Building Sensors

Obviously we are going to be writing classes that implement the Sensor interface.  The fun part for us is that we're going to take advantage of OSGi for all of our sensor registration, bundling, etc.

So the first option we have with the sensors is whether to combine them in one module or build them as separate modules.  The truth is we really don't care.  You can stick with one module or separate modules.  You could mix things up and create multiple modules that each have multiple sensors.  You can include your sensor for your portlet directly in that module to keep it close to what the sensor is testing.  It's entirely up to you.

Our only limitations are that we have a dependency on the Healthcheck API module and our components have to implement the interface and declare themselves with the @Component annotation.

So for our first sensor, let's look at the JVM memory.  Our sensor is going to look at the % of memory used, we'll return GREEN if 60% or less is used, YELLOW if 61-80% and RED if 81% or more is used.  We'll create this guy as a separate module, too.

Our memory sensor class is:

@Component(immediate = true, service = Sensor.class)
public class MemorySensor implements Sensor {

	public static final String NAME = "JVM Memory";

	@Override
	public int getRunSortOrder() {
		// This can run at any time, it's not dependent on others.
		return 5;
	}

	@Override
	public String getName() {
		return NAME;
	}

	@Override
	public String getStatus() {
		// need the percent used
		int pct = getPercentUsed();

		// if we are 60% or less, we are green.
		if (pct <= 60) {
			return STATUS_GREEN;
		}

		// if we are 61-80%, we are yellow
		if (pct <= 80) {
			return STATUS_YELLOW;
		}

		// if we are above 80%, we are red.
		return STATUS_RED;
	}

	protected double getTotalMemory() {
		double mem = Runtime.getRuntime().totalMemory();

		return mem;
	}

	protected double getFreeMemory() {
		double mem = Runtime.getRuntime().freeMemory();

		return mem;
	}

	protected double getUsedMemory() {
		return getTotalMemory() - getFreeMemory();
	}

	protected int getPercentUsed() {
		double used = getUsedMemory();
		double pct = (used / getTotalMemory()) * 100.0;

		return (int) Math.round(pct);
	}

	protected int getPercentAvailable() {
		double pct = (getFreeMemory() / getTotalMemory()) * 100.0;

		return (int) Math.round(pct);
	}
}

Not very fancy.  There are obvious enhancements we could pursue.  We could add a configuration instance so the memory thresholds could be defined in the control panel rather than hard coded.  We could refine the measurement to account for GC.  Whatever.  The point is we have a sensor that is responsible for taking a reading and returning the status string.

Now imagine what you can do with these sensors... You can add a sensor for accessing your database(s).  You can check that LDAP is reachable.  If you use external web services, you could call them to ensure they are reachable (even better if they, too, have some sort of health check facility, your health check can incorporate their health check).

Your sensor options are only limited to what you are capable of creating.

I'd recommend keeping the sensors simple and fast; you don't want a long-running sensor chewing up time/CPU just to get some idea of server health.
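
One way to defend against a sensor that hangs — my own sketch, not something the project described here does — is to run the reading under a time limit and treat a timeout as unhealthy:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TimeoutSensorDemo {

    // Hypothetical guard: run a sensor reading with a time limit and
    // report RED if it doesn't answer in time (or throws).
    static String statusWithTimeout(Callable<String> sensor, long timeoutMs) {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        try {
            return executor.submit(sensor).get(timeoutMs, TimeUnit.MILLISECONDS);
        }
        catch (Exception e) {
            return "RED"; // timed out or failed - treat as unhealthy
        }
        finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A fast sensor answers normally...
        System.out.println(statusWithTimeout(() -> "GREEN", 200));

        // ...while a hung sensor is cut off and reported RED.
        System.out.println(statusWithTimeout(() -> {
            Thread.sleep(5000);
            return "GREEN";
        }, 200));
    }
}
```

That keeps one misbehaving sensor from stalling the entire health check page.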

Building The Sensor Manager

The sensor manager is another key part of our extensible healthcheck system.

The sensor manager is going to use a ServiceTracker so it knows all the sensors that are available and gracefully handles the addition and removal of new Sensor components.  Here's the SensorManager:

@Component(immediate = true, service = SensorManager.class)
public class SensorManager {

	/**
	 * getHealthStatuses: Returns the map of current health statuses.
	 * @return Map map of statuses, key is the sensor name and value is the sensor status.
	 */
	public Map<String, String> getHealthStatus() {
		StopWatch totalWatch = null;

		// time the total health check
		if (_log.isDebugEnabled()) {
			totalWatch = new StopWatch();
			totalWatch.start();
		}

		// grab the list of sensors from our service tracker
		List<Sensor> sensors = _serviceTracker.getSortedServices();

		// create a map to hold the sensor status results
		Map<String, String> statuses = new HashMap<>();

		// if we have at least one sensor
		if ((sensors != null) && (!sensors.isEmpty())) {
			String status;
			StopWatch sensorWatch = null;

			// create a stopwatch to time the sensors
			if (_log.isDebugEnabled()) {
				sensorWatch = new StopWatch();
			}

			// for each registered sensor
			for (Sensor sensor : sensors) {
				// reset the stopwatch for the run
				if (_log.isDebugEnabled()) {
					sensorWatch.reset();
					sensorWatch.start();
				}

				// get the status from the sensor
				status = sensor.getStatus();

				// add the sensor and status to the map
				statuses.put(sensor.getName(), status);

				// report sensor run time
				if (_log.isDebugEnabled()) {
					sensorWatch.stop();

					_log.debug("Sensor [" + sensor.getName() + "] run time: " +
						DurationFormatUtils.formatDurationWords(sensorWatch.getTime(), true, true));
				}
			}
		}

		// report health check run time
		if (_log.isDebugEnabled()) {
			totalWatch.stop();

			_log.debug("Health check run time: " +
				DurationFormatUtils.formatDurationWords(totalWatch.getTime(), true, true));
		}

		// return the status map
		return statuses;
	}

	@Activate
	protected void activate(BundleContext bundleContext, Map properties) {
		// if we have a current service tracker (likely not), let's close it.
		if (_serviceTracker != null) {
			_serviceTracker.close();
		}

		// create a new sorting service tracker.
		_serviceTracker = new SortingServiceTracker(bundleContext, Sensor.class.getName(),
			new Comparator<Sensor>() {

				@Override
				public int compare(Sensor o1, Sensor o2) {
					// compare method to sort primarily on run order and secondarily on name.
					if ((o1 == null) && (o2 == null)) return 0;
					if (o1 == null) return -1;
					if (o2 == null) return 1;

					if (o1.getRunSortOrder() != o2.getRunSortOrder()) {
						return o1.getRunSortOrder() - o2.getRunSortOrder();
					}

					return o1.getName().compareTo(o2.getName());
				}
			});
	}

	@Deactivate
	protected void deactivate() {
		if (_serviceTracker != null) {
			_serviceTracker.close();
		}
	}

	private SortingServiceTracker<Sensor> _serviceTracker;

	private static final Log _log = LogFactoryUtil.getLog(SensorManager.class);
}

The SensorManager has the ServiceTracker instance to retrieve the list of registered Sensor services and uses the list to grab each sensor status.  The getHealthStatus() method is the utility method to hide all of the implementation details but expose the ability to grab the map of sensor status details.


Conclusion

Yep, that's right, this is the conclusion.  That's really all there is to see here.

I mean, there is more: you need a portlet to serve up the health status on demand (a serve-resource request can work fine here), and simply displaying the health status in the portlet view will allow admins to see the health whenever they log into the portal.  You can also add a servlet so external monitoring systems can hit your status page at /o/healthcheck/status (my checked-in project supports this).
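
For the display side, here's a minimal sketch of turning the status map into the simple HTML table described earlier. The method name and markup are illustrative only, not the checked-in project's code:

```java
import java.util.Map;
import java.util.TreeMap;

public class StatusTableRenderer {

    // Renders the sensor/status map as a simple two-column HTML table.
    static String render(Map<String, String> statuses) {
        StringBuilder sb = new StringBuilder("<table>\n");

        sb.append("<tr><th>Sensor</th><th>Status</th></tr>\n");

        for (Map.Entry<String, String> entry : statuses.entrySet()) {
            sb.append("<tr><td>").append(entry.getKey())
              .append("</td><td>").append(entry.getValue())
              .append("</td></tr>\n");
        }

        return sb.append("</table>").toString();
    }

    public static void main(String[] args) {
        // TreeMap just to get a stable, alphabetical row order.
        Map<String, String> statuses = new TreeMap<>();

        statuses.put("JVM Memory", "GREEN");
        statuses.put("LDAP", "RED");

        System.out.println(render(statuses));
    }
}
```

In a portlet or servlet you'd write this string to the response instead of stdout; the point is that the renderer only needs the map, never the individual Sensor implementations.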

But yeah, that's not really important with respect to showing cool OSGi stuff.

Ideally this becomes a platform for you to build out an expandable health check system in your own environment.  Pull down the project, start writing your own Sensor implementations and check out the results.

If you build some cool sensors you want to share, send me a PR and I'll add them to the project.

In fact, let's consider this to be like a community project.  If you use it and find issues, feel free to submit PRs with fixes.  If you build some Sensors, submit a PR with them.  If you come up with a cool enhancement, send a PR.  I'll do some minimal verification and merge everything in.

Here's the github project link to get you started:

Alt Conclusion

Just like there's an alternate title, there's an alternate conclusion.

The alternate conclusion here is that there's some really cool things you can do when you embrace OSGi in Liferay, pretty much the way Liferay has embraced OSGi.

OSGi offers a way to build expandable systems that are very decoupled.  If you need this kind of expansion, focus on separating your API from your implementations, then use a ServiceTracker to access all available instances.

Liferay uses this kind of thing extensively.  The product menu is extensible this way, the My Account pages are extensible in this way, heck even the LiferayMVC portlet implementations using MVCActionCommand and MVCResourceCommand interfaces rely on the power of OSGi to handle the dynamic services.

LiferayMVC is actually an interesting example; there, instead of managing a service tracker list, they manage a service tracker map where the key is the MVC command.  The LiferayMVC portlet uses the incoming MVC command to look up the service instance registered under that command and passes control to it for processing.  This makes the portlet more extensible because anyone can add a new command or override an existing command (using service ranking) and the original portlet module doesn't need to be touched at all.
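As a rough illustration of the command-keyed dispatch idea, here is a stripped-down sketch. This is not the Liferay implementation: a plain HashMap stands in for OSGi's dynamic service tracker map, and the ActionCommand interface and command names are hypothetical.

```java
import java.util.*;

// Minimal stand-in for an MVC command handler.
interface ActionCommand {
    String process(String input);
}

public class CommandDispatchSketch {

    // In Liferay this would be a ServiceTrackerMap populated automatically
    // as command components come and go; here it is a plain map.
    private final Map<String, ActionCommand> commands = new HashMap<>();

    public void register(String commandName, ActionCommand command) {
        commands.put(commandName, command);
    }

    // The dispatch step: route on the command name rather than hardcoded
    // logic, so new commands can be added without touching this class.
    public String dispatch(String commandName, String input) {
        ActionCommand command = commands.get(commandName);
        if (command == null) {
            throw new IllegalArgumentException("No command: " + commandName);
        }
        return command.process(input);
    }
}
```

Because the portlet only ever consults the map, registering a new handler (or re-registering one with a higher ranking, in the OSGi case) changes behavior without modifying the dispatcher.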

Where can you find examples of things you can do leveraging OSGi concepts?  The Liferay source, of course.  Liferay eats its own dog food, and they do a lot more with OSGi than I've ever needed to.  If you have an idea that would benefit from an OSGi implementation but need an example of how to do it, find something in the Liferay source that has a similar implementation and see how they did it.

David H Nebinger 2017-02-02T05:10:20Z
Categories: CMS, ECM

A Content-Driven Approach To Building Powerful And Flexible Wizards

Liferay - Mon, 01/30/2017 - 11:43
Let me begin by clarifying that this post has nothing to do with the Harry Potter universe.

But seriously, you know what I mean by wizards, don't you? Those helpful series of screens that gather a set of choices from the user and then use the captured choices to do something for them. Often, one user selection can lead to a different outcome than the other choices on the same screen.

It turns out I am faced with a requirement to build such a wizard, and the requirements are a little bit amorphous. So I asked myself the same question I have come to ask before embarking on a multi-layered application development effort: Can I accomplish this with the content management system? Or, more verbosely stated: are all the layers I need to accomplish this already there for me?

Well, the answer is yes. It can be done using the CMS. And it can be powerful and flexible.

Here are my requirements for a contrived Mood Wizard:
  • Screen 1: Ask user to pick their position (you know: standing, sitting, etc). Picking an option on Screen 1 takes them to Screen 2.
  • Screen 2: Ask user how they feel (happy, sad, angry, etc.). Picking an option on Screen 2 takes them to Screen 3 unless they pick ‘angry’, in which case they are taken to Screen 4.
  • Screen 3: Ask user to choose an energy level. Picking an option on Screen 3 takes user to Screen 999 (the final screen).
  • Screen 4: User types up a rant and hits Continue to be taken to Screen 999 (the final screen).
  • Screen 999: This is the final screen. User can click the Go button to do something with their choices.
  • At any point during use of the wizard, the user can back up a screen.
  • At any point during use of the wizard, the user can start over.
  • At any point during use of the wizard, the user can see what choices they’ve made from screens that had choices (aka options) rather than custom markup.
  Here are the moving parts of my solution:
  • A structure
  • A template
  • A Dynamic Data List definition (to house rants from angry visitors)
  • A servlet to call into the DynamicDataListRecord API (because it’s a great persistence mechanism with a clean API and is one of the goodies that Liferay comes with). 
Here is the structure I defined.

Some interesting things to note here:
  • The screens are repeatable.
  • The options within a screen are repeatable.
  • Every option has a Target Screen Identifier - the identifier of the screen to load when the option is selected. It also has a Target URL in case picking it needs to change the page location.
That simplicity, to me, is POWER.

You may note that I also have a Custom Markup field for each screen and a Target Screen Identifier for it. This allows for the definition of custom markup and a way to configure the next screen to go to if Custom Markup is adopted for any given screen. In my contrived use case, Custom Markup is perfect for capturing the user's rant if they pick Angry.

And that gives me the FLEXIBILITY I need.

Here is the XML definition of the above structure. And here is the Velocity template I wrote for it.

For the rant screen, I capture some user information (an email address and a rant) and post it to my VisitorRantServlet, which saves it to a DDL record set. The user can set up an instance of the stock Dynamic Data List Display portlet to view/edit the submitted records.

Here's a live demo (when the instance is up). Here are some screenshots if the above link doesn't work. The Web Content Display portlet on top shows the content item I created using the previously defined structure and template. The Dynamic Data List Display portlet below it shows any rants added.

There are lots of possibilities here. Here are a few:
  • Any screen can use custom markup to load any data from a servlet, REST service and such, in much the same way that I use it to capture a visitor’s rant.
  • The template is wide open to code in any extra customization. (You know you’ll need it.)
  • We have the robustness of content versioning.
  • I do wish we had the added robustness of template (and even structure) versioning :-(.
By the way, you don't need a servlet to call into the DDL Record Set API. JSONWS makes it a snap, but there is the inherent complexity of having to deal with basic authentication. The raw API lets you run any trusted code on the server and was my preference for the purpose of this demo.

My thanks to Allen Ziegenfus for sharing this tidbit here; it came in handy.

Code on Github

Javeed Chida 2017-01-30T16:43:41Z
Categories: CMS, ECM

Service Builder Column Naming Randomness

Liferay - Wed, 01/25/2017 - 11:49

I stumbled over a problem the other day that I had solved ages ago and foolishly hadn't taken the time to write down how I did it. I guess it must be fairly rare, as my standard "I might have forgotten to write it down, but someone must have" approach of finding the answer on the internet also failed. So I have committed to solving both issues with one post.

I'm running Liferay 6.2 and SP17 of the SDK. I'd been building my service layer happily for weeks, but after some additional entities went in it just stopped working and kept logging:


[echo] Writing C:\liferay-dev\liferay-developer-studio\liferay-plugins-sdk-6.2-ee\portlets\RandomObject-portlet\docroot\WEB-INF\src\uk\ac\uea\portlet\randomobject\service\persistence\
[echo] java.lang.NullPointerException
[echo] Writing C:\liferay-dev\liferay-developer-studio\liferay-plugins-sdk-6.2-ee\portlets\RandomObject-portlet\docroot\WEB-INF\src\uk\ac\uea\portlet\randomobject\model\impl\
[echo] at
[echo] at

So after a lot of digging (and understanding that the error doesn't bear any direct relation to the problem), it turns out that if you put the word "get", even as a substring, into the name of any Service Builder column, as in my example:


<column name="showWidget" type="boolean"></column>

SB really doesn't like it.
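A simple workaround, assuming you are free to rename the column, is to pick a name that doesn't contain the substring (the "displayWidget" rename below is just an illustration, not from the original project):

```xml
<!-- Breaks 6.2 Service Builder code generation: "showWidget"
     contains "get" as a substring (showWid_get_). -->
<column name="showWidget" type="boolean" />

<!-- A rename that avoids the substring builds cleanly. -->
<column name="displayWidget" type="boolean" />
```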

Hopefully this is of use to someone as confused as I was.

Alex Swain 2017-01-25T16:49:30Z
Categories: CMS, ECM

Liferay DXP(EE) Only, Visualize your workflow.

Liferay - Tue, 01/24/2017 - 17:35

As a supervisor of my department, sometimes I want to check how much progress has been made on a piece of work. Who is working on a certain trade? Who is reviewing a loan application? If a new business plan application has been stuck for two weeks, who is in charge of it? I want more information than the simple word "pending"...


With workflow out of the box, you can't review a workflow's execution log without being assigned to it. However, the workflow has all the data you need; it just takes a little effort to get at it.


In Liferay DXP there's a plugin called Kaleo Designer. You can design your workflow on a graphical panel by conveniently dragging and dropping nodes like tasks, states, conditions and transitions.

I see this plugin as more than a designer: it can actually display any workflow definition. By integrating it with the execution log, it can highlight nodes in different colors to convey different information. So you can display your workflow for each workflow instance like this:

Right now this kind of customization can only be done on Liferay DXP(EE).

If you are a DXP subscription client, please feel free to contact me, I would donate my solution for free.

Neil Jin 2017-01-24T22:35:24Z
Categories: CMS, ECM

Liferay Freemarker Tips and Tricks: Date & Time

Liferay - Mon, 01/23/2017 - 15:46

I've been working with Liferay for quite some time now, but I must confess that I still haven't really made the switch from Velocity to Freemarker for my templates. Even though I know there are a lot of benefits to using Freemarker, like better error messages, the Velocity knowledge and library of snippets I've built up through the years is hard to give up. But with the advent of Liferay DXP, it now seems the perfect time to make the switch.

While working on a problem today, where an asset's (a document's) modified date wasn't updated when you check out/check in a new version, I had to change something in a Freemarker display template a colleague made. At first, before I knew there was a problem in Liferay, I thought the issue was that the template wasn't providing the timezone to the dateUtil call in the template:

<ul>
<#list entries as entry>
    <#assign dateFormat = "MMM d, HH':'mm" />
    <li>${entry.getTitle()} - ${dateUtil.getDate(entry.getModifiedDate(), dateFormat, locale)}</li>
</#list>
</ul>

So this looked like the perfect time to see how to fix this and see if any improvements could be made. I started by fixing the line as-is, just adding the timeZone (which is already present in the template context - see com.liferay.portal.template.TemplateContextHelper).

<ul>
<#list entries as entry>
    <#assign dateFormat = "MMM d, HH':'mm" />
    <li>${entry.getTitle()} - ${dateUtil.getDate(entry.getModifiedDate(), dateFormat, locale, timeZone)}</li>
</#list>
</ul>

While this does the trick and produces the correct datetime in the timezone we want, it did look a bit verbose. So I wondered if Freemarker had something that might make it shorter/sweeter/better. After some looking around I found these two things: built-in date formatting and processing settings.

The first would allow us to drop the dateUtil, but doesn't seem to have a provision for providing a locale and/or timezone. This is where the second article comes in. These processing settings allow us to set some things that further Freemarker processing will take into account, and luckily for us the datetime handling is one of those things. So with the combination of both, our template becomes:

<#setting time_zone=timeZone.ID>
<#setting locale=locale.toString()>
<#setting datetime_format="MMM d, HH':'mm">
<ul>
<#list entries as entry>
    <li>${entry.title} - ${entry.modifiedDate?datetime}</li>
</#list>
</ul>

So now you can see that we can just set up the processor beforehand to use the Liferay timezone and locale, as well as our chosen datetime format. This allows us to then directly use Freemarker's ?datetime built-in on the date field of the asset entry. It will also apply to any further dates you want to print in this template using ?datetime (or ?date or ?time). As these settings only apply to one processor run, you can have different templates, that set them differently, on the same page without them interfering with each other. The following screenshot shows the template above and the same template where the timezone, locale and datetime format are set differently:

The beauty and ease of use of this small improvement has already made me change my mind and hopefully I can write some more Freemarker related blog posts in the future.

Jan Eerdekens 2017-01-23T20:46:33Z
Categories: CMS, ECM