AdoptOS

Assistance with Open Source adoption

ECM

Liferay Hardening for Production Environment

Liferay - Fri, 08/26/2016 - 14:22

The basic bundles that Liferay provides with different application servers are fairly straightforward to install and set up, so anyone can quickly get a working Liferay portal running. However, the default settings Liferay ships with are really meant for developers. So the question is: is it also safe to use those same settings in production? The answer is no, because running Liferay with the default bundle settings would not be secure. That doesn't mean Liferay is insecure; it simply means that you should not deploy Liferay with the defaults and assume it is secure. Here we will collect points that can be used to harden Liferay for production. When I talk about security, or about securing the server by restricting access to certain things in Liferay, it is not always right to generalize these points: what adds security for one project may break a feature for another. I will therefore try to point out specific areas and options that Liferay provides.

Assumptions:

It is good practice to keep a web server in front of the Tomcat/JBoss server bundled with Liferay; let's call these Apache front servers (for example, Apache httpd). Another good practice is to use Liferay's staging/live feature so that content can easily be published to production, although this depends on the needs and purpose of the product or project. The advantage of this feature is that the staging environment can be kept on the internal network while the live environment faces the external network. This way a lot of things can be blocked in the live environment but remain open in staging. For this blog we will assume the setup described above.

1.     Default admin user: The default admin user test@liferay.com needs to be deactivated or removed, and a new custom admin user should be created.

The default.admin.* properties in the portal.properties file should also be reviewed.
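For example, a minimal portal-ext.properties sketch for overriding the defaults used when the portal creates its initial administrator (these property names exist in portal.properties; the values here are made up, and they typically only take effect when the database is first initialized):

default.admin.screen.name=siteadmin
default.admin.email.address.prefix=siteadmin
default.admin.first.name=Site
default.admin.last.name=Admin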

2.     Default pages or URLs blocked by the front server: Undesired pages can be blocked by the Apache web server (Apache front server). With this technique the pages can still be accessed from inside the client network. For example, the live portal's admin pages can be reached from inside the client network to carry out crucial administrative tasks (for example, re-indexing search) but are completely blocked for the outside world.

Undesired pages can also be blocked inside the portal itself, either by disabling the feature for certain roles through permissions or by disabling the page completely with portal properties.

For example (this is just a starting point; more URLs can be blocked based on your requirements):

Page/segment | Staging | Live | Purpose of the page | Technique
/c/ | Enabled | Disabled | Liferay main servlet URLs (e.g. the Liferay login page) | Blocked by the front servers from outside the client network. An entry similar to the following can be added to the front server config: RewriteRule ^.*/c/.*$ /XYZ-theme-theme/html/e404.html [L]
/api/ | Enabled | Disabled | Liferay external APIs | Blocked by the front servers from outside the client network.
/user/ | Disabled | Disabled | Public user personal site pages (depends on the requirement) | Blocked by the front servers from outside the client network.
/group/ | Enabled | Disabled | Private site pages, including control panel pages. The control panel pages are used for content management and portal administration. | In staging, control panel access is needed for content management and portal administration tasks.

 

Apart from the default pages there are also non-unique, site-specific pages. Pages can be accessed in several different ways:

·       Using /web/{site name}/{page name} – needed for sites that do not have a virtual host name. The main site, however, should not be reachable this way.

·       Using layout IDs in the URL, for example /c/portal/layout?p_l_id=<plid>, where <plid> is the layout ID. This should be blocked by the front server (the rule that blocks all /c/.* pages covers it; see the combined sketch after this list).

·       Using /group/{site name}/{page name} – no pages should be accessible this way.

·       Using a locale or language URL prefix, for example /sv_SE/web/{site_name}/{page_name}. These URLs should be blocked or redirected by the front server.
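Putting the front-server blocking together, here is a hypothetical Apache httpd (mod_rewrite) sketch for the live front servers. The error page path follows the example in the table above; adjust the patterns and targets to your own theme and sites:

RewriteEngine On
# Block the Liferay main servlet and layout-id URLs (/c/...)
RewriteRule ^/c/.*$ /XYZ-theme-theme/html/e404.html [L]
# Block the external APIs (/api/...)
RewriteRule ^/api/.*$ /XYZ-theme-theme/html/e404.html [L]
# Block private site and control panel pages (/group/...)
RewriteRule ^/group/.*$ /XYZ-theme-theme/html/e404.html [L]
# Redirect locale-prefixed page URLs such as /sv_SE/web/{site}/{page}
RewriteRule ^/[a-z]{2}_[A-Z]{2}/(web|group)/.*$ / [R=302,L]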

3.     Default pages or sites blocked by the portal: Depending on the requirements, the public and private personal sites for users can be disabled using portal-ext.properties. These are the properties with which they can be disabled:

layout.user.public.layouts.enabled=false

layout.user.private.layouts.enabled=false

Apart from this, the default site "/web/guest" should also be disabled or removed from the portal.

4.     Default portlets: There is a group of default portlets that can be accessed from any public page in the portal by appending the proper parameters to the end of the URL. For example, the login portlet can be accessed in the following way:

?p_p_id=58&p_p_lifecycle=0&p_p_state=maximized&p_p_mode=view&saveLastPath=false&_58_struts_action=%2Flogin%2Flogin

Portlets must not be accessible to the public in this manner. This can be prevented in the following two ways:

a.     In the front servers, by redirecting all requests that have p_p_id=x in the URL, where x is a portlet ID that is not whitelisted (see the sketch after this list).

b.     Disable undesired portlets in the live portal completely by removing their definitions from the portal's liferay-portlet.xml (create an external plugin with a liferay-portlet-ext.xml file). In the staging portal, removing the portlets is not necessary.
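For option (a), a hypothetical mod_rewrite sketch that redirects any request whose p_p_id is not on a whitelist back to the front page (the IDs 3 and 101 below are made-up examples):

RewriteEngine On
# Capture the portlet id from the query string
RewriteCond %{QUERY_STRING} (^|&)p_p_id=([^&]+)
# Redirect if the captured id is not whitelisted
RewriteCond %2 !^(3|101)$
RewriteRule ^.*$ / [R=302,L]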

 

 

5.     Remote services: Undesired remote services can be disabled with invoker IP filtering so that the service APIs cannot be called from outside the local machine. Our suggestion is to disable all external APIs with this filtering mechanism in the live portal. In addition, APIs that are not needed in staging should be disabled there as well. A combined portal-ext.properties example follows the table below.

Interface | Staging | Live | Purpose of the API | Technique (properties to set)
Axis Servlet | Depends on requirement | Disable | Liferay's default SOAP web service | axis.servlet.hosts.allowed, axis.servlet.https.required
Liferay Tunnel Servlet | – | Disable | Liferay Tunnel Servlet | tunnel.servlet.hosts.allowed, tunnel.servlet.https.required, tunneling.servlet.shared.secret
Spring Remoting Servlet | – | Disable | Spring Remoting Servlet | spring.remoting.servlet.hosts.allowed, spring.remoting.servlet.https.required
JSON Servlet | – | Disable | JSON Servlet | json.servlet.hosts.allowed, json.servlet.https.required
JSON Web Service Servlet | – | Disable | Liferay's default JSON web service | jsonws.servlet.hosts.allowed, jsonws.servlet.https.required
WebDAV Servlet | – | Disable | WebDAV provides functionality to create, change and move documents in the document library on a remote server. | webdav.servlet.hosts.allowed, webdav.servlet.https.required
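As a combined example, here is a portal-ext.properties sketch for the live portal, assuming only the local machine should be allowed to call these services. SERVER_IP is a token Liferay expands to the server's own address; verify the exact property names against the portal.properties of your version:

axis.servlet.hosts.allowed=127.0.0.1,SERVER_IP
axis.servlet.https.required=true
tunnel.servlet.hosts.allowed=127.0.0.1,SERVER_IP
tunnel.servlet.https.required=true
spring.remoting.servlet.hosts.allowed=127.0.0.1,SERVER_IP
spring.remoting.servlet.https.required=true
json.servlet.hosts.allowed=127.0.0.1,SERVER_IP
json.servlet.https.required=true
jsonws.servlet.hosts.allowed=127.0.0.1,SERVER_IP
jsonws.servlet.https.required=true
webdav.servlet.hosts.allowed=127.0.0.1,SERVER_IP
webdav.servlet.https.required=true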

6.     Disabling default servlets: Liferay offers a number of servlets and servlet filters that are open for use by default. The policy should be that any servlet or filter that is not needed is disabled or removed from the portal. Liferay provides instructions for disabling them. For example, a filter can be disabled through the portal-ext.properties file:

# The audit filter populates the AuditRequestThreadLocal with
# the appropriate request values to generate audit requests.
com.liferay.portal.servlet.filters.audit.AuditFilter=false

Apart from the servlet filters, there are several SSO hooks configured in the portal properties file. These should also be disabled or removed if they are not being used.
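For example, a hypothetical portal-ext.properties sketch that disables SSO filters and authenticators that are not in use (these property names appear in the 6.x portal.properties; check your version before copying):

com.liferay.portal.servlet.filters.sso.cas.CASFilter=false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmFilter=false
com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter=false
cas.auth.enabled=false
ntlm.auth.enabled=false
open.sso.auth.enabled=false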

7.     Struts actions: Struts actions can be handled in one of the following ways (to be decided per project):

 

a.     Undesired struts actions can be disabled in the portal by deleting them from the struts-config.xml file.

b.     Undesired struts actions can be blocked by the front servers with regular-expression filtering and redirection; for example, if the URL contains the string "/portal/login", the request is redirected to, say, the front page of the site (see the sketch after this list).

c.     Disabling the corresponding portlets in the portal.
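For option (b), a hypothetical front-server rule that redirects the login struts action to the front page of the site:

RewriteCond %{REQUEST_URI} /portal/login
RewriteRule ^.*$ / [R=302,L]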

The full list of struts actions can be found at https://github.com/liferay/liferay-portal/blob/master/portal-web/docroot/WEB-INF/struts-config.xml. Disabling them is done by modifying this file.

Note: According to Liferay support, "Modifying this file by deleting some of the actions is not advised at all, and can cause the crash of your portal! You can override or add different struts actions, but deleting them is a dangerous maneuver."

8.     portal.properties: Anyone who works with Liferay knows how important this file is when configuring the portal. There are more than a thousand entries in it, so going through all of them here is not an option; the file can also be browsed in a more readable format online. We will share some of the important security-related properties here:

  • jdbc.default.password: Keeping the database password in clear text is not good practice for a production environment. A JNDI data source can be used to connect to the database instead; jdbc.default.jndi.name is the required key.
  • *.auth.enabled: These properties determine which kinds of login are allowed on the portal (LDAP, OpenID, Facebook, etc.). For example: ldap.auth.enabled.
  • passwords.encryption.algorithm: Use a password encryption algorithm that is hard to break, for example SSHA.
  • com.liferay.portal.servlet.filters.*: Liferay enables a large number of servlet filters by default. They can be disabled if they are not being used.

This list goes on, but the best way to learn about portal.properties is to go through the file itself. You can search it with keywords such as 'secur', 'password', 'encryption', 'auth', 'timeout', 'hash' and 'encrypt' to find more security-related information.
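Putting a few of these together, a minimal portal-ext.properties sketch might look like this (the values are examples only and the JNDI name is hypothetical):

jdbc.default.jndi.name=jdbc/LiferayPool
passwords.encryption.algorithm=SSHA
ldap.auth.enabled=false
open.id.auth.enabled=false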

And the list could go on and on: there is no end to securing your portal. Security isn't a state, it's a process.

Samujjwal Sahu 2016-08-26T19:22:02Z
Categories: CMS, ECM

New themes suite (2/4) - Westeros Bank theme

Liferay - Tue, 08/16/2016 - 06:59

Hi everyone!

 

A week ago we started the new series of blog entries about our new themes for Liferay 7. Today, we are glad to present our second theme: Westeros Bank.

 

 

"A bank case"


The Westeros Bank theme has been developed to showcase a realistic banking project. Sites like this are usually very complex. With this in mind, we have tried to build the simplest common mechanisms to provide a solid foundation for you to get started and apply your own patterns and final touches.


This theme includes some cool features such as:


  • Four different types of structured content to create a cool layout

  • Four navigation placeholders for a fine-grained navigation experience and customization

 

 

  • The device visibility of those navigation placeholders can be configured from Theme settings

 

Soon in the marketplace

 

The Westeros Bank theme will soon be published in the marketplace.

 

In the meantime, you can give it a try using this westeros-bank-theme.war file or build and deploy it yourself using the sources at https://github.com/liferay/liferay-portal/tree/master/modules/apps/foundation/frontend-theme/frontend-theme-westeros-bank

Marcos Castro 2016-08-16T11:59:32Z
Categories: CMS, ECM

CounterLocalService - An elegant solution to a common problem

Liferay - Mon, 08/15/2016 - 15:38

Has it ever happened to you that something you use all the time doesn't quite add up, that there is something about it you don't fully understand? Well, that has been my experience with Liferay's Counter service. So I dove into the code to understand exactly how it works and be able to sleep soundly again.

What is it for?

The Counter service lets us obtain the ID for a new entity automatically when saving it to the database. More specifically, when defining a new entity with Service Builder in Liferay it is usual not to add any kind of sequence or auto-increment to the entity's primary key (mainly because not all databases support all of those options).

Since the primary key of our entity has no auto-increment of any kind, we need some way to generate that ID without ever assigning the same number to two different entities.

The Counter service is typically used to obtain this number, most commonly through CounterLocalServiceUtil.increment(miClase.class.getName()). This method returns a different identifier every time it is invoked and never repeats one. It looks like magic, but how it works is actually quite simple.
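As a minimal usage sketch (Liferay 6.x-style API), assuming a hypothetical Service Builder entity MiClase with its generated MiClaseLocalServiceUtil (standing in for the author's miClase example):

import com.liferay.counter.service.CounterLocalServiceUtil;
import com.liferay.portal.kernel.exception.SystemException;

public class MiClaseCreator {

    public static MiClase create() throws SystemException {
        // Ask the Counter service for the next unique ID for this entity type.
        long nextId = CounterLocalServiceUtil.increment(MiClase.class.getName());

        // Use that ID as the primary key of the new entity.
        return MiClaseLocalServiceUtil.createMiClase(nextId);
    }
}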

What do you see?

Once you know this (and especially when testing), you usually go looking for the database table where the current number for your entity is stored. You quickly find the counter table, which has the columns name and currentId. The name column stores the class for which the current number is kept (in our case, the canonical name of miClase) and the currentId column stores the current number.

This is where things start to not add up: you see that the currentId for your class is 200 while the last saved entity has the ID 210.

It gets worse when, apparently at random, the IDs jump by 100.

So you are never sure what number to assign to currentId in the database when loading entities by hand, or what IDs to give entities you insert manually.

Well, all of this has an explanation, and now you will see what it is.

How does it work?

Once you look at the source code of the service implementation and dig into it a bit, you discover that it works in a much simpler way than it seems.

When the Liferay server starts, it creates a singleton class that holds a map with all the usable currentIds (this map is empty at startup). The map also stores the base currentId (the one read from the database) and what we might call the jump increment.

When CounterLocalServiceUtil.increment is called for our class, the map is checked for that class's currentId.

  • If the key is not found, the service goes to the database to look up the currentId for that class in the counter table. Once located, the current value is read and increased according to the increment base (a second parameter the method accepts). Normally, for a base increment of 1, the initial currentId is increased by 100. Once the new currentId has been added to the singleton's map, the current value is returned.
  • If the key is found, the currentId from the map is increased by the base increment and returned. At this point the service checks whether the new currentId is greater than or equal to the base currentId plus the jump increment. If it is, the database row is updated, setting its currentId to the value just returned by the method.

I imagine that explained this way it can sound a bit convoluted, so let's look at an example with numbers.

Suppose we have an entity of our own called miClase, for which there are 120 entities with consecutive IDs from 1 to 120. Since we have used the Counter service before, the counter table contains a row with the canonical name of our class and the value 100 in currentId.

When our Liferay starts up, it creates the singleton with the empty map; when increment is called for our class (to store a new entity), it bumps the currentId in the database from 100 to 200 and returns the ID 200 for our new entity.

When increment is called again for another new entity, the method returns the next ID (201) without touching the database, and this keeps happening for every number up to 300. When an increment reaches 300, that number is stored in the database.

So if the Liferay server is restarted, on the first request it will again bump the currentId of our class, this time to 400.
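Here is a rough sketch of that logic in plain Java, using the numbers from the example above. It is not Liferay's actual implementation, just an illustration of the windowing idea; the persistence calls are stubs:

import java.util.HashMap;
import java.util.Map;

public class CounterSketch {

    private static final long JUMP = 100; // size of each in-memory window of IDs

    private static class Window {
        long current; // last ID handed out
        long ceiling; // when current reaches this, persist to the counter table
    }

    private final Map<String, Window> windows = new HashMap<String, Window>();

    public synchronized long increment(String name) {
        Window w = windows.get(name);
        if (w == null) {
            w = new Window();
            w.current = readFromCounterTable(name) + JUMP; // e.g. 100 -> 200
            w.ceiling = w.current + JUMP;                  // next flush at 300
            writeToCounterTable(name, w.current);          // counter table now holds 200
            windows.put(name, w);
            return w.current;                              // first ID returned: 200
        }
        w.current++;                                       // 201, 202, ... served from memory
        if (w.current >= w.ceiling) {
            writeToCounterTable(name, w.current);          // e.g. persist 300
            w.ceiling = w.current + JUMP;
        }
        return w.current;
    }

    // Stand-ins for the real reads/writes against the counter table.
    private long readFromCounterTable(String name) { return 100; }
    private void writeToCounterTable(String name, long value) { }
}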

Conclusions

This design achieves great performance for an operation that, because of how frequently it is used, is critical (it affects every object creation in the database): the current ID of each class is kept in memory (in the singleton class), and the database is only accessed the first time (to load it) and once every X calls (depending on the base increment and the jump) to update it.

The only drawback of this scheme is that a few numbers are lost every time the server is restarted.

As the title says, a very elegant solution to a small performance problem, heh.

Ignacio Roncero Bazarra 2016-08-15T20:38:39Z
Categories: CMS, ECM

Liferay Portal 7.0 CE GA3 Release

Liferay - Mon, 08/15/2016 - 13:37

I am pleased to announce the release of:  Liferay Portal 7.0 CE GA3! 
[Download Now]

What's New
  • Overriding LPKG Files: A while back I wrote a blog entry on patching OSGi modules. The original process wasn't very elegant because it involved altering the out of the box .lpkg files. GA3 now includes a way to override the out of the box modules without altering the .lpkg files. See the updated blog entry on patching modules for more details.
  • Fixes: Liferay Portal 7.0 CE GA3 contains many fixes.  For complete list see here.
Release Nomenclature

Following Liferay's versioning scheme established in 2010, this release is Liferay 7.0 CE GA3. The internal version number is 7.0.2 (i.e., the third release of 7.0). Future CE releases of 7.0 will be designated GA4, GA5, and so on. See below for upgrade instructions from 6.1, 6.0, and 5.x.

Downloads

You can find the 7.0 release on the usual downloads page. 

Source Code

As Liferay is an open source project, many of you will want to get at its guts. The source is available as a zip archive on the downloads page, or on its home on GitHub. Many community contributions went into this release, and hopefully many more in future releases! If you're interested in contributing, take a look at our contribution page.

Support Matrix

Liferay's general policy is to update our support matrix for each release, testing Liferay against newer major releases of supporting operating systems, app servers, browsers, and databases (we regularly update the bundled upstream open source libraries to fix bugs or take advantage of new features in the open source we depend on). 

Liferay 7.0 CE GA3 was tested extensively for use with the following Application/Database Servers: 

Liferay CE Application Servers:

  • Apache Tomcat 8.0 with Java 8
  • Wildfly 10.0 with Java 8

Liferay CE Databases:

  • HSQLDB 2 (only for demonstration, development, and testing)
  • MySQL 5.6
  • MariaDB 10
  • PostgreSQL 9.3

Documentation

The Liferay Documentation Team has been hard at work updating all of the documentation for the new release. This includes updated (and vastly improved and enlarged) javadoc and related reference documentation; updated installation and development documentation can be found on the Liferay Developer Network. Our community has been instrumental in identifying the areas of improvement, and we are constantly updating the documentation to fill in any gaps.

Bug Reporting

As always, the project continues to use issues.liferay.com to report and manage bug and feature requests.  If you believe you have encountered a bug in the new release (shocking, I know), please be cognizant of the bug reporting standards and report your issue on issues.liferay.com, selecting the "7.0.0 CE GA3" release as the value for the "Affects Version/s" field.

Upgrading

Good news for those of you on 6.0 or prior! Liferay introduced the seamless upgrade feature with Liferay 6.1. Seamless upgrades allow Liferay to be upgraded more easily. In most cases, pointing the latest version of Liferay to the database of the older version is enough. There are some caveats though, so be sure to check out the Upgrading section on the Liferay Developer Network for more detail on upgrading to 7.0.
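For example, a hypothetical portal-ext.properties for pointing a fresh 7.0 bundle at the database of the older installation (driver, URL and credentials below are placeholders; back up the database before upgrading):

jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://dbhost/lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=liferay_user
jdbc.default.password=liferay_password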

Getting Support

Support for Liferay 7.0 CE comes from the wonderful and active community, from which Liferay itself was nurtured into the enterprise offering it is today.  Please visit the community pages to find out more about the myriad avenues through which you can get your questions answered.

Liferay and its worldwide partner network also provides services, support, training, and consulting around its flagship enterprise offering, which is due to be released shortly after this CE release.

Also note that customers on existing releases such as 6.1 and 6.2 continue to be professionally supported, and the documentation, source, and other ancillary data about these releases will remain in place.

What's Next?

Of course we in the Liferay Community are interested in your take on the new features in Liferay 7.0.  Work has already begun on the next evolution of Liferay, based on user feedback and community ideas.  If you are interested in learning more about how you can get involved, visit the Liferay Community pages and dig in.

Kudos!

This release was produced by Liferay's worldwide portal engineering team, and involved many hours of development, testing, writing documentation, translating, testing some more, and working with the wider Liferay community of customers, partners, and open source developers to incorporate all sorts of contributions, both big and small. We are glad you have chosen to use Liferay, and hope that it meets or exceeds your expectations!

Jamie Sammons 2016-08-15T18:37:11Z
Categories: CMS, ECM

The Glorious Simplicity of Structure Inheritance

Liferay - Sat, 08/13/2016 - 17:30

The Glorious Simplicity of Structure Inheritance

This field (highlighted yellow) on a structure editing screen is what I am talking about.

When I first saw the words Parent Structure there, the following thoughts came to mind, in increasing order of coolness:

  1. All fields in the parent structure are inherited by the child structure

  2. The idea of polymorphic content, i.e. content of one type (i.e. structure) Y that is a child of type X, is also of type X. This is in essence the IS-A relationship in object oriented programming.

  3. Asset Publisher would respect above polymorphic behavior so I can simply query for content of the parent type and get back all content tied to that structure as well as all content tied to child structures.

  4. The idea that a child structure would automatically be serviced by the CMS template that serves its parent.

If you guessed that all of that is true, then you would be… wrong. The only point that holds true is the first: a structure simply inherits the fields of its parent structure (if one is specified).

That’s it! Done!

Now, this sort of obviates the need for the rest of this post. But if you’re like me, you want to try stuff out and push the limits a bit to see how stringent things really are, and what the actual payoff is, in practice, of using parent structures. So that is what follows.

Important: There is no IS-A relationship between a child structure and its parent. These are not classes - they are simply XML definitions complying with Liferay’s DDM structure schema. The child structure takes all fields that its parent structure offers, and then cuts the cord in all its individualistic glory, refusing to be identified as an extension of its parent for practical purposes. The only evidence of the relationship is in that parent-structure association observed in the child structure definition. I’ll demonstrate this from an Asset Publisher perspective in a bit.

Let’s have a parent

Okay. Time for something concrete.

I have my classic Poem structure - graphic shown below.

I use the above structure for all my classical poem forms. But now, I have a burning requirement to have a separate structure for my ecphrastic poems.

Ecphrasis

noun  ek·phra·sis \ˈek-frə-səs\

a literary description of or commentary on a visual work of art

My new structure needs all the fields that my existing Poem structure already has. In addition, it has two fields:

  • Picture (to capture an image of the painting or photograph that inspired the poem)

  • Citation (text related to the picture such as name of work, name of artist, etc.)

One option would be to redefine all the fields from the Poem structure in our new structure, which is actually very simple and quick to do: simply go to the Source view of the structure, select all, copy, and paste into the Source view of the new structure. But that is not the point here. We want to follow the DRY principle as much as possible so that common fields live in one place, avoiding a proliferation of repetitive data that really ought to be maintained in one place. Remember, templates work with the fields of a structure, and templates are stubborn little creatures that fuss about field names and such, so following a DRY approach can be quite important from a maintainability standpoint.

Anyway, no TA-DA moment here. That Parent Structure field that every structure offers helps us out. This is what the new structure looks like now.

 

Note how the parent structure is selected as Poem. And that the new structure, Ecphrastic, has only two fields: Picture and Citation. So, when you go to create content of type Ecphrastic, you see something like this:

New structure needs a new template

A template can be associated with one and only one structure, so content that wishes to use your new structure needs a new template. Of course, you could go to the template used for the parent structure and select your new structure in the template definition, but then it would cease to serve content associated with the parent structure.

You will need a new template for your new structure. This is not a problem: simply replicate your old template and add the presentation parts that show the new fields. You may also consider importing the existing template if the new fields just need to be tacked on at the end or the beginning. Do whatever makes sense.

So, here’s my CMS template to service my new Ecphrastic structure.

#parse ("$templatesPath/$selectMood.getData()") <h3>$txtPoemTitle.getData()</h3> <div class="poem-body"> $htmlVerse.getData() </div> <div style="clear:both"></div> <div class="about-poem"> $htmlAbout.getData() <br/> <img alt="Picture" src="$imgPicture.getData()" /> <br/> <i>$htmlCitation.getData()</i> </div>  

The newly added presentation parts are the image and the citation at the end. Basically, we show the picture right-aligned with the citation text right below it. This is how our content looks when the template renders it.

Ugly, I know; but in a beautiful sort of way, yes?

Asset Publisher is just being Asset Publisher

If you wanted to configure an Asset Publisher to bring your content back, you would literally have to configure it so as to bring back content matching both parent and child structure types. As far as consumers of content are concerned, each structure is a first class structure in its own right. The structure inheritance is just cleverness to keep from repeating common fields.

Here’s a snapshot of what an Asset Publisher configuration might look like.

Now, if you wanted to display a mix of content of types Poem and Ecphrastic, you would have to work that into your Application Display Template. The ADT does not really care about the relationships between structures: inside an ADT you are working with the content itself, the actual asset. In an ADT, you just use XPath syntax to find the fields you want by their names. If the field is found in the content, you retrieve it; otherwise you get null.

Here is the ADT I wrote to show the results of the above configured Asset Publisher instance.

#set ($vocabularyLocalService = $serviceLocator.findService("com.liferay.portlet.asset.service.AssetVocabularyLocalService"))
#set ($vocabularies = $vocabularyLocalService.getGroupVocabularies($getterUtil.getLong($groupId)))
### we are interested in one specific vocabulary - poems
#foreach ($voc in $vocabularies)
    #if ($voc.name == "poems")
        #set ($poemsVocabularyId = $voc.vocabularyId)
    #end
#end
#if (!$entries.isEmpty())
<div>
    #foreach ($curEntry in $entries)
        #set ($picture = "")
        #set ($renderer = $curEntry.getAssetRenderer())
        #set ($link = $renderer.getURLViewInContext($renderRequest, $renderResponse, ''))
        #set ($journalArticle = $renderer.getArticle())
        #set ($articlePrimKey = $journalArticle.getResourcePrimKey())
        #set ($catLocalService = $serviceLocator.findService("com.liferay.portlet.asset.service.AssetCategoryLocalService"))
        #set ($catPropertyLocalService = $serviceLocator.findService("com.liferay.portlet.asset.service.AssetCategoryPropertyLocalService"))
        #set ($articleCats = $catLocalService.getCategories("com.liferay.portlet.journal.model.JournalArticle", $articlePrimKey))
        #set ($viewURL = $assetPublisherHelper.getAssetViewURL($renderRequest, $renderResponse, $curEntry))
        #set ($document = $saxReaderUtil.read($journalArticle.getContent()))
        #set ($rootElement = $document.getRootElement())
        #set ($pictureSelector = $saxReaderUtil.createXPath("dynamic-element[@name='imgPicture']"))
        #set ($picture = $pictureSelector.selectSingleNode($rootElement).getStringValue())
        #set ($colorCode = "")
        ###### get the poem categories #######
        #foreach ($cat in $articleCats)
            #if ($cat.vocabularyId == $poemsVocabularyId)
                #set ($colorCode = $catPropertyLocalService.getCategoryProperty($cat.categoryId, "color_code"))
            #end
        #end
        ###### display the poem as a card #######
        <a href="$viewURL" style="color:white;">
            <div class="span4" style="background-color: #$colorCode.value; color:white; padding:10px; margin:10px; height:200px; border-radius:10px;">
                <span style="font-weight:bold;color:white;font-size:20px;">${curEntry.getTitle($locale)}</span>
                <hr>
                #if ($picture != "")
                    <img src="$picture" align="right" width="120px" style="padding:5px; margin:5px;">
                #end
                ${curEntry.getSummary($locale)}
            </div>
        </a>
    #end
</div>
#end

The only snippet I added for my new ecphrastic content was this:

#if ($picture != "")
    <img src="$picture" align="right" width="120px" style="padding:5px; margin:5px;">
#end

And here is what the Asset Publisher portlet response looks like when that ADT renders. You can see how my Ecphrastic content displays with a thumbnail of the inspirational artwork right-aligned.

Okay, so why glorious?

Ah, yes!

The Glorious Simplicity of Structure Inheritance.

Two reasons why I chose that as title for this post.

  1. I was thinking about The Curious Case Of Benjamin Button - may have had something to do with it.

  2. The simplicity is glorious because, IMO, we don't want any more cleverness than the sharing of common fields to come out of this relationship between structures. Could you imagine the sort of madness that might ensue if an Asset Publisher, for instance, were to display content of every structure that simply extended another structure? What if the extension were only for reuse purposes and there was no semantic relationship between the structures apart from field names (rare, but possible)? The Asset Publisher already does a pretty good job with its querying mechanism, giving us various criteria by which to bring back content (multiple structures, tags, categories, fields). We don't need polymorphic madness injected into the mix.

That concludes it.

Javeed Chida 2016-08-13T22:30:43Z
Categories: CMS, ECM

New themes suite (1/4) - Product APP theme

Liferay - Fri, 08/05/2016 - 06:23

Hi everyone!

 

For a long time, themes have been one of the most requested features in Liferay, and as we announced last year, Liferay 7 will be a game changer.


This will be the first of four blog entries in which we plan to present our vision for the theming layer moving forward.

 

 

"Real cases"

To begin with, we have developed a set of themes focused on solving real Liferay theme use cases. The main goal is to provide a minimum viable implementation of a real site.

 

We've taken inspiration from real product, banking and digital magazine sites to distill a useful and inspirational set of themes that can get you started!

 

"Minimal overhead"

With simplicity in mind, we have built awesome themes that add only the necessary overhead. This means that we stick to standard pieces all over the sites, minimizing the need for custom development.

 

Creating a site based on these themes should give you enough traction to get started while still staying out of your way, letting you apply your own patterns.


 

Product APP theme

In today's post we will take a look at the new Product APP theme, a theme conceptualized to create an engaging site to showcase and promote a product.

 

As some of the images can hint, this theme includes some cool features such as:

 

  • Six different structures and templates to easily create attractive content for your site

  • Configurable social media links

  • Configurable full-screen navigation

 

The Product App theme will soon be published in the marketplace.

In the meantime, and while we clear out the last steps of the process, you can give it a try using this [temporary GA2 war version] or build and deploy it yourself using the sources at https://github.com/liferay/liferay-portal/tree/master/modules/apps/foundation/frontend-theme/frontend-theme-product-app

 

Marcos Castro 2016-08-05T11:23:21Z
Categories: CMS, ECM

Dining in the Windy City

Liferay - Thu, 08/04/2016 - 16:36

It’s never too early to plan your next meal–or your next, next, next meal for that matter. While it may seem like Liferay Symposium North America is a while away, it’ll be here before you know it. And when it does, you want to make sure you’re prepared to do one of the most important things a person can do while in Chicago. Eat.

What follows is a list of Chicago’s must-eats. You know, in case you don’t have time to go on a food tour or sift through hundreds of Yelp reviews...

First, what’s served on a poppy seed bun with yellow mustard, relish, chopped onions, tomato wedges, sport peppers, a pickle spear, and a dash of celery salt to top it all off? Hint: ketchup is strictly forbidden when it comes to this Chicagoan trademark.

1. The Chicago Dog

Conveniently located just one block away from the Hilton Chicago, Devil Dawgs offers a variety of Chicago-style dogs that you can enjoy for the same price as your morning latte. And if you don’t mind walking a little bit out into the city, Portillo’s has Italian Beef (another Chicago favorite) on their menu as well. But let’s say you’re looking for something that’ll satisfy your 2 a.m. late-night food cravings–worry not! Jim’s is open 24/7. And as for their hot dogs, well, the raving reviews speak for themselves.

2. Deep Dish Pizza

It’d be a shame to leave Chicago without having eaten a real, Chicago-style, deep dish pizza pie. And while pizza joints abound in this city, here are a few fan favorites: 

Lou Malnati’s is hailed as the go-to place to experience deep dish pizza, the Chicago way. But like every classic dish, fans usually take to different sides when it comes to finer details, such as flavor, texture, and in this case, sauce to crust ratio. Giordano’s offers a stuffed crust and has been a strong contender of Lou Malnati’s for a while now. Both are great, so you can’t go wrong with either choice. 

3. Dessert

We all have it–that mysterious, extra stomach for dessert. Which is perfect, considering there are so many places in Chicago that’ll satisfy your sweet tooth. Do-Rite Doughnuts offers an eclectic selection of flavors that’ll leave you wanting to try all the different options. Fans consider these doughnuts to be “definitely worth the indulgence.” But if you’re a more adventurous eater, Black Dog Gelato is the place to visit. You’ll find some pretty interesting flavors: Rosemary Irish Cream, Mexican Hot Chocolate, and even Goat Cheese Cashew Caramel, just to name a few. Wonder what they all taste like? Why not stop by?

Don’t miss out on your chance to attend Liferay Symposium and experience all that the Windy City has to offer! Register now with code LIFERAY250 to receive a $250 discount!

Melanie Chung 2016-08-04T21:36:57Z
Categories: CMS, ECM

What I Learned at an Event for Event Planners Pt. 2

Liferay - Thu, 08/04/2016 - 15:19

Last month, I attended BizBash LA for the second time (you can see my 2015 recap here). I had a great time hearing from industry leaders and meeting fellow event professionals. It’s one of the few times I get to enjoy events from the attendees’ perspective, and it was reaffirming to realize that as planners, we all encounter unpredictable issues at our events, no matter how much we try to avoid them. The difference is in how we react to these situations. And as former White House Social Secretary Jeremy Bernard noted, it’s crucial to take note of lessons learned with every event to improve for the next time.

Here are some takeaways from what I learned:

Customers come first

When planning an event, it’s important to think about the attendee journey. Put yourself in the guest’s shoes and walk through the entire experience, from beginning to end. What type of information do you need pre-event? What’s the first thing you do onsite? Where do you go after registration? Where is the WiFi information posted? How obvious is it where the restrooms are? Be mindful of the little details that make a big difference.

In the events world, ‘customers’ refer not only to attendees, but also to partners, sponsors, and internal stakeholders. It’s important to establish clear set goals upfront and remind yourself about what you’re working towards throughout the planning process. With sponsors, think less about what you can get out of them, and more about what you can offer to them. Sharing the costs means sharing the experience, so make sure you’re creating an experience that is mutually beneficial to you both.

Be authentic

To celebrate its 10th anniversary, Google created a festival-like experience for this year’s Google I/O. They went back to their roots to really embrace the heart and soul of the company. Developers from around the world enjoyed playful activations and celebratory concerts.

As the largest tech conference in the world, Dreamforce is the ultimate expression of Salesforce as a brand. The event includes various summits that showcase local entrepreneurs, customer success stories, and volunteer opportunities.

Ultimately, it comes down to having clear values and staying true to who you are as a company.

We’re in the happiness business

Events are often a celebration of some sort, whether it’s a product launch or a holiday party or a press conference. To produce great events, you need to produce great experiences. And to produce great experiences, you have to know your audience and understand what makes them tick.

At Zappos, the Fungineering team puts on a number of events to foster employee engagement and happiness. There is a strong emphasis on delivering WOW through service and interaction throughout the journey, from wearable invitations to entertaining entryways to cab vouchers at the end of the night. It’s all about taking care of your attendees and making them feel valued.

Last call

Events are the ideal places to share experiences and make connections. And in order to experience, we must feel and connect. So let’s commit to creating environments that allow for meaningful connections to be made.

Join me at Liferay Symposium North America on Sept. 26-27 in Chicago to see how I apply these lessons learned. You can view all upcoming and past Liferay events on www.liferay.com/events.

Angela Wu 2016-08-04T20:19:47Z
Categories: CMS, ECM

Liferay Arquillian test cases using maven and spring

Liferay - Tue, 08/02/2016 - 05:57

Liferay Arquillian test cases using maven and spring

Arquillian integrates seamlessly with familiar testing frameworks (e.g., JUnit, TestNG), allowing tests to be launched using existing IDE, Ant and Maven test plugins.

Arquillian can use three container types:

1. Embedded: the test runner contains the container as a library.

2. Managed: the test runner starts and stops the container as a separate process.

3. Remote: the test runner relies on an operational container that is already started.

There are a few steps required to set up and run Arquillian for plugin projects:

  • Setup Tomcat for Arquillian
  • Configure Project
  • Write Unit Test Case
  • Code coverage Report
Setup Tomcat for Arquillian

1. Adding Users to Tomcat

Arquillian will deploy the plugin inside the container, which gives it a runtime in which it can locate your dependencies from the test.

Go to your $TOMCAT_HOME/conf directory and open the tomcat-users.xml file. 

If the file does not exist, create it. Add the following to the file, then save and close it:

   <?xml version="1.0"?>

            <tomcat-users>

        <role rolename="tomcat" />

        <role rolename="manager-gui" />

        <role rolename="manager-script" />

        <role rolename="manager-jmx" />

        <role rolename="manager-status" />

        <user password="tomcat" roles="tomcat, manager-gui,

  manager-script ,manager-jmx, manager-status"  

username="tomcat"/>

</tomcat-users>

2. Add Connection for Arquillian

Arquillian is going to use Tomcat's manager application to deploy the plugin as part of the test execution, so modify the environment properties to support it.

Make sure that Tomcat has a manager folder in its webapps directory.

  • Go to your $TOMCAT_HOME/bin directory and open the setenv.sh (if you are on MAC/Linux) or setenv.bat (if you are on Windows)

  • Add the following to the file.

JMX_OPTS="-Dcom.sun.management.jmxremote  -Dcom.sun.management.jmxremote.authenticate=false  -Dcom.sun.management.jmxremote.port=8099  -Dcom.sun.management.jmxremote.ssl=false"

       CATALINA_OPTS="${CATALINA_OPTS} ${JMX_OPTS}"

You can alter the port to a value other than 8099; 8099 is just the default used here.

Restart your Tomcat server to make sure the changes are picked up.

 

Configure Project

 1. Open the pom.xml of your project and add the dependencies below:

<!-- Test Dependencies -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-tomcat-remote-7</artifactId>
    <version>1.0.0.CR7</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.shrinkwrap.resolver</groupId>
    <artifactId>shrinkwrap-resolver-impl-maven</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <scope>test</scope>
</dependency>

Note: arquillian-tomcat-remote-7 is for the Remote container type. For Managed, use the arquillian-tomcat-managed-7 dependency, and for Embedded use arquillian-tomcat-embedded-7.

2. Add the following dependencyManagement node just above your listed dependencies:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>1.1.11.Final</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

3. Now add a folder called java under the /src/test folder.

4. Add a new resources folder, create a new file called arquillian.xml in it, and add the following contents (adjusting values based on your settings).

  • Arquillian settings for the Remote container type:

<?xml version="1.0" encoding="UTF-8"?>

<arquillian xmlns="http://jboss.org/schema/arquillian"

  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

  xsi:schemaLocation="http://jboss.org/schema/arquillian  http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <engine>

        <property name="deploymentExportPath">target/deployments</property>

    </engine>

    <container qualifier="tomcat" default="true">

        <configuration>

            <property name="user">tomcat</property>

            <property name="pass">tomcat</property>

        </configuration>

    </container>    

</arquillian>

  • Arquillian settings for the Managed container type:

<container qualifier="tomcat" default="true">
    <configuration>
        <property name="user">tomcat</property>
        <property name="pass">tomcat</property>
        <property name="catalinaHome">Path of the Tomcat folder</property>
        <property name="catalinaBase">Path of the Tomcat folder</property>
    </configuration>
</container>

 

Write Unit Test Case

In the /src/test/java folder, create two new test classes: ArquillianDeployment.java and MyArquillianPortletTest.java.

  • ArquillianDeployment.java is a generic class used to define the deployment type of the test cases (e.g., war or jar):

package com.arquillian.portlet.test;

import java.io.File;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;
import org.jboss.shrinkwrap.resolver.api.maven.PomEquippedResolveStage;

public class ArquillianDeployment {

    private static final String LOCAL_WEBAPP_DIR = "src/main/webapp/WEB-INF";

    public static WebArchive createDeployment() {
        PomEquippedResolveStage mavenResolver = Maven.resolver().loadPomFromFile("pom.xml");

        File[] libs = mavenResolver.importRuntimeAndTestDependencies().resolve().withTransitivity().asFile();

        WebArchive war = ShrinkWrap.create(WebArchive.class, "arquillian-test.war")
            .addPackages(true, "com.arquillian.portlet.test", "com.arquillian.controller")
            .addAsWebInfResource(new File(LOCAL_WEBAPP_DIR, "portlet.xml"))
            .addAsWebInfResource(new File(LOCAL_WEBAPP_DIR, "liferay-plugin-package.properties"))
            .addAsWebInfResource(new File(LOCAL_WEBAPP_DIR, "liferay-display.xml"))
            .addAsWebInfResource(new File(LOCAL_WEBAPP_DIR, "liferay-portlet.xml"))
            .addAsWebInfResource(new File(LOCAL_WEBAPP_DIR + "/spring-context/portlet", "arquillian-portlet.xml"), "/spring-context/portlet/arquillian-portlet.xml")
            .addAsWebInfResource(new File(LOCAL_WEBAPP_DIR + "/spring-context", "portlet-application-context.xml"), "/spring-context/portlet-application-context.xml");

        for (File file : libs) {
            war.addAsLibrary(file);
        }

        return war;
    }

}

  • MyArquillianPortletTest.java is the Arquillian test class that contains the test methods:

package com.arquillian.portlet.test;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;

import com.arquillian.controller.PortletViewController;

@RunWith(Arquillian.class)
public class MyArquillianPortletTest {

    private static Log LOGGER = LogFactoryUtil.getLog(MyArquillianPortletTest.class);

    private PortletViewController mvcPortlet;

    private long userId = 10101;

    @Deployment(name = "MyArquillian")
    public static WebArchive init() {
        return ArquillianDeployment.createDeployment();
    }

    @Before
    public void setUp() throws Exception {
        mvcPortlet = new PortletViewController();
    }

    @Test
    public void testGetUserDetails() {
        mvcPortlet.getUserDetails(userId);
    }

}
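The PortletViewController under test is not shown in the original post; a hypothetical minimal version, just so the example is complete, might look like this (using the 6.x UserLocalServiceUtil API):

package com.arquillian.controller;

import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.exception.SystemException;
import com.liferay.portal.model.User;
import com.liferay.portal.service.UserLocalServiceUtil;

public class PortletViewController {

    // Looks up a user by ID; returns null if the user cannot be found.
    public User getUserDetails(long userId) {
        try {
            return UserLocalServiceUtil.getUser(userId);
        }
        catch (PortalException e) {
            return null;
        }
        catch (SystemException e) {
            return null;
        }
    }

}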

3. Save the file

4. Execute the test cases with mvn clean test.

 

Code Coverage Report using Jacoco

JaCoCo is a free code coverage library for Java. It is very simple to add to all types of builds, including Ant and Maven.

  • Add Jacoco profile in pom.xml

<profiles>
    <profile>
        <id>jacoco</id>
        <dependencies>
            <dependency>
                <groupId>org.jacoco</groupId>
                <artifactId>org.jacoco.core</artifactId>
                <version>0.7.4.201502262128</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.jboss.arquillian.extension</groupId>
                <artifactId>arquillian-jacoco</artifactId>
                <version>1.0.0.Alpha8</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.jacoco</groupId>
                    <artifactId>jacoco-maven-plugin</artifactId>
                    <version>0.7.4.201502262128</version>
                    <executions>
                        <execution>
                            <goals>
                                <goal>prepare-agent</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>report</id>
                            <phase>prepare-package</phase>
                            <goals>
                                <goal>report</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

  • Run the unit test cases using mvn clean test -Pjacoco jacoco:report

NOTE: 

  • The generated war of the test classes will be stored in the portlet's \target\deployments folder.
  • You can see the report in the \target\site\jacoco folder.

 

 

priti parmar 2016-08-02T10:57:18Z
Categories: CMS, ECM

Every 30 Seconds...

Liferay - Mon, 07/25/2016 - 18:03

Every 30 seconds, someone becomes a victim of human trafficking. It’s a sad reality, and here are some common myths and misconceptions about this crime:

“Human trafficking victims always come from situations of poverty or from small rural villages.” 
Reality: Although poverty can be a factor in human trafficking because it is often an indicator of vulnerability, poverty alone is not a single causal factor or universal indicator of a human trafficking victim. Trafficking victims can come from a range of income levels, and many may come from families with higher socioeconomic status. 

“There must be elements of physical restraint, physical force, or physical bondage when identifying a human trafficking situation.”
Reality: Psychological means of control, such as threats, fraud, or abuse of the legal process, are sufficient elements of the crime. There is a wide spectrum of human trafficking forms of coercion and control. 

“Victims of human trafficking will immediately ask for help or assistance and will self-identify as a victim of a crime.”
Reality: Victims of human trafficking often do not immediately seek help or self-identify themselves due to a variety of factors including lack of trust, self-blame, or specific instructions by the traffickers regarding how to behave when talking to law enforcement or social services. 

At Liferay, we strive to enrich people to reach their full potential to serve others. At every Liferay Symposium, we like to invest in local communities and give back to those in need. This year, we’ve partnered with Rahab’s Daughters, a local Chicago non-profit organization that aims to spread awareness about the underrated problem of human trafficking within the United States. In addition to educating the public on the realities of human trafficking, Rahab’s Daughters rescues women and helps rehabilitate survivors with the hope that one day, they will be reintegrated into society.

Attendees at Liferay Symposium are encouraged to participate in the Stuff-a-Backpack Campaign that will take place throughout the event. Backpacks will be provided to attendees to stuff with toiletries and basic necessities to be given to victims upon their rescue. Additionally, Liferay Foundation will donate $50 to Rahab’s Daughters for each backpack that is stuffed. Together, we can support Rahab’s Daughters in their efforts to locate and rescue trafficked women.

Help trafficked women find a better life – one where they don’t have to live in fear. Participate in this year’s charity activity at Liferay Symposium. If you haven’t signed up yet, register with discount code LIFERAY250 to save $250. See you in Chicago!

Melanie Chung 2016-07-25T23:03:45Z
Categories: CMS, ECM

Reflections on a Solid EVP

Liferay - Mon, 07/25/2016 - 16:55

For my very first EVP trip, my family and I had the opportunity to shadow the day-to-day operations of Solidarity Brasil and ABBA. These organizations share many of the same core values as Liferay in caring for children in their city, São Paulo, Brazil. What I did varied from manual labor on a house that will be used as an aftercare center for underage girls victimized by sex trafficking, to building trust and relationships with street children, to volunteering at a school in a favela, to helping kids with their English at a group home for abused children. Throughout the week, I was able to see how our core values could translate into something very different from open source software - helping children and families.

 

Produce Excellence - “We give our best efforts to get excellent results that stand in the market. But the process is also important: we won't betray our values just to get the job done.”

ABBA and Solid Brasil take a holistic approach to tackling the needs of children in their city by addressing the multiple areas that put kids in danger. They aren't just reacting to the problems they see; they're strategically addressing the issues before and after they happen by preventing, intervening, educating, and restoring lives. They work to prevent issues by educating families and children in poor, underserved communities. They have shelters, group homes, foster care, counselors and social workers to help families and children when issues do occur. For children in need, they work alongside them to help them have safe and healthy lives. They reach out to children who live on the streets and children who have been trafficked to try to provide safe environments for them.

Lead by Serving - “Leadership is a calling to serve others and stay humble. Our people lead by example, regardless of position or title.”

They truly lead by serving. From the director to the maintenance man, it is clear that everyone has a real relationship with the kids that they work with. What I saw looked more like a big extended family with many parent-figures modeling a healthy life and caring for the kids rather than an organization appeasing their stakeholders.

Value People - “People are inherently valuable. Therefore, we respect people, invest in relationships, and celebrate one another.”

I don’t know who wrote that one ^ but ditto. The poor, orphaned, neglected, and slaves - these are titles given to... people. If I’m to be honest with myself, I often forget that all people are inherently valuable so it was an amazingly refreshing experience to be able to witness and take part with others whose purpose is to value people (especially those people forgotten by the society at large).

Grow and Get Better - “It's not about being better than someone else, but being better than you were yesterday. We seek to learn and grow from every single experience.”

Some of the staff grew up in very similar situations as the kids they serve and some of the staff grew up in entirely different countries. They might not be the ideal candidates for the job but they are people with heart. And, much like Liferay tries to do, I saw them intentionally invest in the individual to help them grow and get better.

Stay Nerdy - “We enjoy the unique personalities that we have at Liferay. We encourage our people to share their interests, have fun, and be comfortable being themselves.”

From what I saw, the people in the organization had the mobility to serve in areas that interest them. One Canadian expat spearheaded a foster care system in the city of São Paulo. A mom from a favela became a house mother for abused kids. A German-Iranian with broken Portuguese opened up a school and afterschool program. And a family is leading a pilot program to have the first aftercare home for underaged girls in their state. People are given the freedom and support to explore their desires and pursue them.

 

At Liferay, I think it’s worthwhile for all of us to take a look at these core values and think about how we might implement them in our own jobs. How can I produce excellence? How can I lead by serving? How can I value people? How can I grow and get better. And as a junior content strategist, I would like to think about ways I can stay nerdy so I can implement Liferay’s core values.

 

p.s. - To be completely honest, the Employee Volunteer Program was the main benefit that drew me to apply for a job here. I’m so thankful for EVP and I encourage you, all of my colleagues, to take advantage of this awesome opportunity we have. Even if you don’t want to use EVP each year, we still can request a grant to donate to an organization of our liking. (It took me just 2 mins to fill out and I had the check in my hand the next day!)

p.p.s. - Here are some pics for your viewing pleasure:

 

Building a wall at Casa Liberdade (aftercare home for sex trafficking victims).

Casa Elohim, group home for minors affected by neglect or abuse.

Playing games with children on the streets.

“Sai curioso!” -  Playing card games during recess at Boas Novas, a school in a favela.

View looking over São Paulo.

Tchau. Até mais, Brasil!

  Patrick Chung 2016-07-25T21:55:29Z
Categories: CMS, ECM

Audience Targeting 2.0 and LCS Will not be Compatible with Liferay Portal 7 Community

Liferay - Mon, 07/18/2016 - 18:11
Today I’m writing to announce that we are discontinuing support for Liferay Connected Services and Audience Targeting for Liferay Portal 7 Community. The main factor behind the decision was the lack of community adoption for both offerings.

Regarding Audience Targeting, there was low adoption by community members compared with most other Marketplace Plug-Ins. Surprisingly, Audience Targeting actually had more interest from Enterprise Subscribers, as the download numbers show: while our Community is an order of magnitude larger than our Subscriber base, the enterprise version of Audience Targeting as of today has 2567 downloads compared with 1951 downloads for the community version. We take that as a sign that most community members aren’t as interested in the functionality provided by the plug-in, which is usually used for Digital Marketing scenarios.

A similar train of thought holds true for Liferay Connected Services, a cloud-based set of tools for monitoring Liferay systems in production and applying patches and updates. After holding a public beta for over a year, we saw almost 3x uptake by enterprise users vs. community users.

Focusing on a single version of both products will also allow us to evolve Audience Targeting and LCS more quickly into mature products. When targeting and LCS become more feature-rich, we may re-visit the offerings and package them in a way that’s attractive to the community.

I’ve made two announcements in a row now that are negative from the Community's perspective, and it probably looks like Liferay is moving away from our commitment to open source. But our focused approach to targeting and LCS will also free up resources for us to invest in other things that are more valuable to the community in the short term. We have been thinking about Community a lot here at Liferay and I’ll soon be announcing some of the ways we want to invest more in all of you and improve your overall experience with Liferay, so stay tuned.

Bryan Cheung 2016-07-18T23:11:27Z
Categories: CMS, ECM

OSGi Module Patching Guide

Liferay - Thu, 07/14/2016 - 14:43

I wanted to share some testing that I have been doing around patching OSGi modules and adding the patched modules to your own Liferay installations.  One of the many benefits of modularity is that it allows these kinds of changes to be made without having to recompile the whole platform from scratch.

Update: This procedure has been updated to make use of a new feature in Liferay Portal 7 CE GA3 which allows updated modules to override the original module without modifying the original .lpkg file, which is much simpler. For more details see the README.

Disclaimer: This procedure is only meant for use with Liferay 7 CE.  Customers using Liferay DXP should continue using the patching instructions provided in the official documentation.

Overview

The example I would like to use is patching the JSP compiler module within Liferay 7 GA3.

There are a few steps that we need to take in order to compile a patched version of the module and then add it to your system.  These steps can be used for patching pretty much any module in Liferay.  But keep in mind, if you run into any issues with your installation, it's advisable to remove all patched modules and try to recreate the issue using a clean installation of Liferay before you report the issue.

The process for patching a module really breaks down into two main areas: compiling the custom module and adding the custom module to your Liferay installation.

Compiling the custom module

In this section we will clone the repository where the module resides and compile the module.  In the case of this example, the fix has already been committed to the GitHub repository, so no code changes are necessary.  We do need to modify the version of the module to create a snapshot version of the module we are replacing in Liferay 7.

Clone and compile the module:
  1. Clone the com-liferay-portal-osgi-web repository from github: git clone https://github.com/liferay/com-liferay-portal-osgi-web.
  2. From within the bnd.bnd file in portal-osgi-web-servlet-jsp-compiler, set the version to 2.0.7.IDENTIFIER, where IDENTIFIER is whatever you want it to be, for example 2.0.7.PATCH-1 (see the sketch just after this list).
  3. Compile the module by running the following from within the portal-osgi-web-servlet-jsp-compiler directory: gradle assemble.
  4. This will create a new version of the bundle called com.liferay.portal.osgi.web.servlet.jsp.compiler-2.0.7.PATCH-1.jar in: portal-osgi-web-servlet-jsp-compiler/build/libs
  5. Rename com.liferay.portal.osgi.web.servlet.jsp.compiler-2.0.7.PATCH-1.jar to: com.liferay.portal.osgi.web.servlet.jsp.compiler.jar removing the version identifier
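For reference, the version bump in step 2 is a one-line change to that module's bnd.bnd file (the PATCH-1 identifier below is only an example; use whatever identifier you picked):

Bundle-Version: 2.0.7.PATCH-1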
Deploying the custom module to Liferay

With Liferay 7, all of the out-of-the-box modules are provided inside Liferay LPKG files, which are zip files.  In order to override an out-of-the-box module, a patched version of the module should be copied to an override directory in the osgi/marketplace directory. If the module being overridden is located inside of Liferay CE Static.lpkg, it will need to be copied inside of osgi/static instead.  Liferay should not be running when performing these steps.

Replacing the original module:
  1. Copy the patched module into osgi/static/override since the original module resides inside of the Liferay CE Static.lpkg file.
  2. Start Liferay
  3. Once Liferay is started, log in to the Gogo shell with telnet localhost 11311 and type: lb | grep 2.0.7.PATCH-1.  This should show that your patched module is deployed and running.
Jamie Sammons 2016-07-14T19:43:00Z
Categories: CMS, ECM

Liferay 7, Service Builder and External Databases

Liferay - Wed, 07/13/2016 - 22:51

So I'm a long-time supporter of ServiceBuilder.  I saw its purpose way back on Liferay 4 and 5 and have championed it in the forums and here in my blog.

With the release of Liferay 7, ServiceBuilder has undergone a few changes mostly related to the OSGi modularization.  ServiceBuilder will now create two modules, one API module (comparable to the old service jar w/ the interfaces) and a service module (comparable to the service implementation that used to be part of a portlet).

But at its core, it still does a lot of the same things.  The service.xml file defines all of the entities, you "buildService" (in gradle speak) to rebuild the generated code, consumers still use the API module and your implementation is encapsulated in the service module.  The generated code and the Liferay ServiceBuilder framework are built on top of Hibernate so all of the same Spring and Hibernate facets still apply.  All of the features used in the past are also supported, including custom SQL, DynamicQuery, custom Finders and even External Database support.

External Database support is still included for ServiceBuilder, but there are some restrictions and setup requirements that are necessary to make them work under Liferay 7.

Examples are a good way to work through the process, so I'm going to present a simple ServiceBuilder component that will be tracking logins in an HSQL database separate from the Liferay database.  That last part is obviously contrived since one would not want to go to HSQL for anything real, but you're free to substitute any supported DB for the platform you're targeting.

The Project

So I'll be using Gradle, JDK 1.8 and Liferay CE 7 GA2 for the project.  Here's the command to create the project:

blade create -t servicebuilder -p com.liferay.example.servicebuilder.extdb sb-extdb

This will create a ServiceBuilder project with two modules:

  • sb-extdb-api: The API module that consumers will depend on.
  • sb-extdb-service: The service implementation module.
The Entity

So the first thing we need to define is our entity.  The service.xml file is in the sb-extdb-service module, and here's what we'll start with:

<?xml version="1.0"?> <!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN" "http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd"> <service-builder package-path="com.liferay.example.servicebuilder.extdb"> <!-- Define a namespace for our example --> <namespace>ExtDB</namespace> <!-- Define an entity for tracking login information. --> <entity name="UserLogin" uuid="false" local-service="true" remote-service="false" data-source="extDataSource" > <!-- session-factory="extSessionFactory" tx-manager="extTransactionManager" --> <!-- userId is our primary key. --> <column name="userId" type="long" primary="true" /> <!-- We'll track the date of last login --> <column name="lastLogin" type="Date" /> <!-- We'll track the total number of individual logins for the user --> <column name="totalLogins" type="long" /> <!-- Let's also track the longest time between logins --> <column name="longestTimeBetweenLogins" type="long" /> <!-- And we'll also track the shortest time between logins --> <column name="shortestTimeBetweenLogins" type="long" /> </entity> </service-builder>

This is a pretty simple entity for tracking user logins.  The user id will be the primary key and we'll track dates, times between logins as well as the user's total logins.

Just as in previous versions of Liferay, we must specify the external data source for our entity/entities.

ServiceBuilder will create and manage tables only for the Liferay database.  ServiceBuilder will not manage the tables, indexes, etc. for any external databases.

In our particular example we're going to be wiring up to HSQL, so I've taken the steps to create the HSQL script file with the table definition as:

CREATE MEMORY TABLE PUBLIC.EXTDB_USERLOGIN(
    USERID BIGINT NOT NULL PRIMARY KEY,
    LASTLOGIN TIMESTAMP,
    TOTALLOGINS BIGINT,
    LONGESTTIMEBETWEENLOGINS BIGINT,
    SHORTESTTIMEBETWEENLOGINS BIGINT);

The Service

The next thing we need to do is build the services.  In the sb-extdb-service directory, we'll need to build the services:

gradle buildService

Eventually we're going to build out our post login hook to manage this tracking, so we can guess that we could use a method to simplify the login tracking.  Here's the method that we'll add to UserLoginLocalServiceImpl.java:

public class UserLoginLocalServiceImpl extends UserLoginLocalServiceBaseImpl {

    private static final Log logger = LogFactoryUtil.getLog(UserLoginLocalServiceImpl.class);

    /**
     * updateUserLogin: Updates the user login record with the given info.
     * @param userId User who logged in.
     * @param loginDate Date when the user logged in.
     */
    public void updateUserLogin(final long userId, final Date loginDate) {
        UserLogin login;

        // first try to get the existing record for the user
        login = fetchUserLogin(userId);

        if (login == null) {
            // user has never logged in before, need a new record
            if (logger.isDebugEnabled()) logger.debug("User " + userId + " has never logged in before.");

            // create a new record
            login = createUserLogin(userId);

            // update the login date
            login.setLastLogin(loginDate);

            // initialize the values
            login.setTotalLogins(1);
            login.setShortestTimeBetweenLogins(Long.MAX_VALUE);
            login.setLongestTimeBetweenLogins(0);

            // add the login
            addUserLogin(login);
        } else {
            // user has logged in before, just need to update record.

            // increment the logins count
            login.setTotalLogins(login.getTotalLogins() + 1);

            // determine the duration time between the current and last login
            long duration = loginDate.getTime() - login.getLastLogin().getTime();

            // if this duration is longer than last, update the longest duration.
            if (duration > login.getLongestTimeBetweenLogins()) {
                login.setLongestTimeBetweenLogins(duration);
            }

            // if this duration is shorter than last, update the shortest duration.
            if (duration < login.getShortestTimeBetweenLogins()) {
                login.setShortestTimeBetweenLogins(duration);
            }

            // update the last login timestamp
            login.setLastLogin(loginDate);

            // update the record
            updateUserLogin(login);
        }
    }
}

After adding the method, we'll need to build services again for the method to get into the API.

Defining The Data Source Beans

So we now need to define our data source beans for the external data source.  We'll create an XML file, ext-db-spring.xml, in the sb-extdb-service/src/main/resources/META-INF/spring directory.  When our module is loaded, the Spring files in this directory will get processed automatically into the module's Spring context.

<?xml version="1.0"?> <beans default-destroy-method="destroy" default-init-method="afterPropertiesSet" xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd" > <!-- NOTE: Current restriction in LR7's handling of external data sources requires us to redefine the liferayDataSource bean in our spring configuration. The following beans define a new liferayDataSource based on the jdbc.ext. prefix in portal-ext.properties. --> <bean class="com.liferay.portal.dao.jdbc.spring.DataSourceFactoryBean" id="liferayDataSourceImpl"> <property name="propertyPrefix" value="jdbc.ext." /> </bean> <bean class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy" id="liferayDataSource"> <property name="targetDataSource" ref="liferayDataSourceImpl" /> </bean> <!-- So our entities are all appropriately tagged with the extDataSource, we'll alias the above liferayDataSource so it matches the entities. --> <alias alias="extDataSource" name="liferayDataSource" /> </beans>

These bean definitions are a big departure from the classic way of using an external data source.  Previously we would define separate data source beans from the Liferay Data Source beans, but under Liferay 7 we must redefine the Liferay Data Source to point at our external data source.

This has a couple of important side effects:

  • Only one data source can be used in a single ServiceBuilder module.  If you have three different external data sources, you must create three different ServiceBuilder modules, one for each data source.
  • The normal Liferay transaction management limits the scope of transactions to the current module.  To manage transactions that cross ServiceBuilder modules, you must define and use XA transactions.

The last line, the alias line, defines a Spring alias for the liferayDataSource matching the data source name used in your service.xml file.

So, back to our example.  We're planning on writing our records into HSQL, so we need to add the properties to the portal-ext.properties for our external datasource connection:

# Connection details for the HSQL database
jdbc.ext.driverClassName=org.hsqldb.jdbc.JDBCDriver
jdbc.ext.url=jdbc:hsqldb:${liferay.home}/data/hypersonic/logins;hsqldb.write_delay=false
jdbc.ext.username=sa
jdbc.ext.password=

The Post Login Hook

So we'll use blade to create the post login hook.  In the sb-extdb main directory, run blade to create the module:

blade create -p com.liferay.example.servicebuilder.extdb.event -t service -s com.liferay.portal.kernel.events.LifecycleAction sb-extdb-postlogin

Since blade doesn't know we're really adding a sub module, it has created a full standalone gradle project.  While not shown here, I modified a number of the gradle project files to make the postlogin module a submodule of the project.
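The exact files to touch depend on how blade generated the projects, but as a rough sketch, the parent project's settings.gradle would need to include the new module alongside the existing two (the module names here simply match this example):

include 'sb-extdb-api', 'sb-extdb-service', 'sb-extdb-postlogin'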

We'll create the com.liferay.example.servicebuilder.extdb.event.UserLoginTrackerAction with the following details:

/**
 * class UserLoginTrackerAction: This is the post login hook to track user logins.
 *
 * @author dnebinger
 */
@Component(
    immediate = true, property = {"key=login.events.post"},
    service = LifecycleAction.class
)
public class UserLoginTrackerAction implements LifecycleAction {
    private static final Log logger = LogFactoryUtil.getLog(UserLoginTrackerAction.class);

    /**
     * processLifecycleEvent: Invoked when the registered event is triggered.
     * @param lifecycleEvent
     * @throws ActionException
     */
    @Override
    public void processLifecycleEvent(LifecycleEvent lifecycleEvent) throws ActionException {
        // okay, we need the user login for the event
        User user = null;

        try {
            user = PortalUtil.getUser(lifecycleEvent.getRequest());
        } catch (PortalException e) {
            logger.error("Error accessing login user: " + e.getMessage(), e);
        }

        if (user == null) {
            logger.warn("Could not find the logged in user, nothing to track.");

            return;
        }

        // we have the user, let's invoke the service
        getService().updateUserLogin(user.getUserId(), new Date());

        // alternatively we could just use the local service util:
        // UserLoginLocalServiceUtil.updateUserLogin(user.getUserId(), new Date());
    }

    /**
     * getService: Returns the user tracker service instance.
     * @return UserLoginLocalService The instance to use.
     */
    public UserLoginLocalService getService() {
        return _serviceTracker.getService();
    }

    // use the OSGi service tracker to get an instance of the service when available.
    private ServiceTracker<UserLoginLocalService, UserLoginLocalService> _serviceTracker =
        ServiceTrackerFactory.open(UserLoginLocalService.class);
}

Checkpoint: Testing

At this point we should be able to build and deploy the api module, the service module and the post login hook module.  We'll use the gradle command:

gradle build

In each of the submodules you'll find a build/libs directory where the bundle jars are.  Fire up your version of Liferay 7 CE GA2 (make sure the jdbc.ext properties are in the portal-ext.properties file before starting) and put the jars in the $LIFERAY_HOME/deploy folder.  Liferay will pick them up and deploy them.

Drop into the gogo shell and check your modules to ensure they are started.
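For example, with the default Gogo shell port, something along these lines should list the three bundles as active (the grep pattern is only an illustration; adjust it to your actual bundle symbolic names):

telnet localhost 11311
g! lb | grep extdb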

Log into the portal a few times and you should be able to find the database in the data directory and browse the records to see what it contains.

Conclusion

Using external data sources with Liferay 7's ServiceBuilder is still supported.  It's still a great tool for building a DB-based OSGi module, and it still allows you to generate the bulk of the DB access code while encapsulating it behind an API in a controlled manner.

We reviewed the new constraints on ServiceBuilder imposed by Liferay 7:

  • Only one (external) data source per Service Builder module.
  • The external data source objects, the tables, indexes, etc., must be manually managed.
  • For a transaction to span multiple Service Builder modules, XA transactions must be used.

You can find the GitHub project code for this blog here: https://github.com/dnebing/sb-extdb

David H Nebinger 2016-07-14T03:51:15Z
Categories: CMS, ECM

Mobile First with Liferay Screens

Liferay - Wed, 07/13/2016 - 15:26

 

Mobile First with Liferay Screens

 

Mobile has changed the world and will go on doing so for the foreseeable future. A stream of new mobile devices and operating systems is putting pressure on organizations to meet the high expectations of users, who expect a mobile digital experience that also matches their personal behavior, history and profile. Keeping up is not enough now that mobile has become the dominant communication platform. With Liferay Screens, organizations can give themselves a head start. And keep it.

 

Digital experience
Open a site and view static content? This is not what modern users expect at all. Users are accustomed to literally being able to shape their digital world themselves. If they show interest in a subject, or their profile matches a particular group, then they expect content and functionality to be geared to this. Users count on the right access at the right moment, for example to relevant information, events and interesting downloads. In short: they want a personalized and consistent digital experience – on every device and in every application.

 

 

Native mobile app development tool
The Liferay Digital Experience Platform has been designed to achieve this digital experience. One component is the Liferay Screens mobile app development tool, which makes it possible to present functionality and data from the Liferay platform in a native mobile app. This means that we can directly make use of standard Liferay facilities, such as login or search, from an iOS app or an Android app. And that a form filled in and sent via the mobile app can also be found on the portal and vice versa.

Smartphone, laptop or PC – in this way, the content for all devices and applications is provided by the same Liferay platform. The result is the coherent and uniform user experience that today’s users demand.

 

Why native?
Can we also create a digital experience of this sort by means of a responsive site, for example? The answer is: yes. But native development is the only way to optimally satisfy the requirements of app users. This is a question of speed, and also, primarily, of user-friendliness. A mobile site or HTML5 or hybrid app just has a different 'feel' than a native app.

Unlike a hybrid or HTML5 app, a native app also gives full access to all components of the device. Interaction with other apps on the device is possible as well.

On a responsive site, navigation can be a tricky issue. The most relevant information is not always presented most prominently. Liferay Screens works with a single source and a single administrator, just as a responsive site does, but the presentation can be geared 100% to the device and the wishes of the user.

 

How Liferay Screens works
Liferay Screens works with reusable components, screenlets, which use the Liferay content and web services for native mobile apps. Screenlets can be compared with the plug-ins of the Liferay platform; they contain the functionality necessary for, for example, a list or detailed display of content from the CMS. This functionality is formed by four elements: View, Theme, Connector and Mobile SDK. By combining screenlets, we can bring together different functionalities in one flow, for example displaying content and entering comments or ratings.

 

 

The theming is reusable, too, and also simple to modify. This makes the development of follow-up native mobile apps based on Liferay Screens considerably faster and easier.

With Liferay Screens, we can provide access to information both from the Liferay platform and from other enterprise systems, and combine this in the app. The tool is open source and fully compatible with Android Studio and Xcode. Naturally, the connection with Liferay gives a development advantage to Liferay developers.

 

More advantages of Liferay Screens

  • Optimum security: The authentication in the app matches that of the platform. So users who access the data provided from the Liferay environment are always known.
  • Liferay permission system is available: The roles and permissions are the same for app and website.
  • Extensive possibilities for branding: The Liferay Screens app gives complete control of the interface and the device. This makes branding easy to implement, both internally and externally.
  • Content available offline: Users can obtain access offline to the information they need in their work.
  • Out-of-the-box connection to the Liferay server.
  • Standardized apps architecture.

 

 

Componence gives organizations a head start
Liferay Screens is an indispensable link in every mobile strategy based on Liferay. In combination with the standard functionalities of Liferay and suitable implementation of the platform, this tool makes it possible to provide a consistent inter-device user journey: from the desktop and the portal to the smartphone and the mobile app.

The strength of Liferay Screens is the reusability of components. Componence has already developed large numbers of these screenlets, with which we can completely eliminate the generally higher investment on native development. Our screenlets are at the service of organizations looking for a head start with native mobile apps that satisfy all the requirements and wishes of modern users.

 

View our native mobile apps
Would you like to find out more about Liferay Screens and about the app that Componence recently developed for healthcare provider Philadelphia? Our next blog will appear very soon on Liferay.com. It will focus on Phlink, our native mobile app based on Liferay Screens.

Interested? Then watch our video now and mark the afternoon of 13 October in your diary!

 

Maarten van Heiningen 2016-07-13T20:26:32Z
Categories: CMS, ECM

OSGi Module Dependencies

Liferay - Wed, 07/06/2016 - 22:55

It's going to happen.  At some point in your LR7 development, you're going to build a module which has runtime dependencies.  How do you satisfy those dependencies though?

In this brief blog entry I'll cover some of the options available...

So let's say you have a module which depends upon iText (and its dependencies).  It doesn't really matter what your module is doing, but you have this dependency and now have to figure out how to satisfy it.

Option 1 - Make Them Global

This is probably the easiest option but is also probably the worst.  Any jars that are in the global class loader (Tomcat's lib and lib/ext, for example), are classes that can be accessed anywhere, including within the Liferay OSGi container.

But global jars have the typical global problems.  Not only do they need to be global, but all of their dependencies must also be global.  Also, global classes are the only versions available; you can't really vary them to allow different consumers to leverage different versions.

Option 2 - Let OSGi Handle Them

This is the second easiest option, but it's likely to not work.  If you declare a runtime dependency in your module and if OSGi has a bundle that satisfies the dependency, it will be automatically available to your module.

This will work when you know the dependency can be satisfied, either because you're leveraging something the portal provides or you've actually deployed the dependency into the OSGi container (some jars also conveniently include OSGi bundle information and can be deployed directly into the container).
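For reference, "OSGi bundle information" just means the standard OSGi headers in the jar's META-INF/MANIFEST.MF, along these lines (the values here are purely illustrative):

Bundle-SymbolicName: org.example.somelibrary
Bundle-Version: 1.2.3
Export-Package: org.example.somelibrary.api;version="1.2.3"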

For our example, however, it is unlikely that iText will have already been deployed into OSGi as a module, so relying on OSGi to inject it may not end well.

Declaring the runtime dependency is going to be handled in your build.gradle file.  Here's a snippet for the iText runtime dependency:

runtime group: 'com.lowagie', name: 'itext', version: '1.4.8'

If iText (and its dependencies) have been successfully deployed as an OSGi bundle, your runtime declaration will ensure it is available to your module.  If iText is not available, your module will not start and will report unsatisfied dependencies.

Option 3 - Make An Uber Module

Just like uber jars, uber modules have all of the dependent classes exploded out of their original jars and made available within the module jar.

This is actually quite easy to do using Gradle and BND.

In your build.gradle file, you should declare your runtime dependencies just as you did for Option 2.

To make the uber module, you also need to include the resources in your bnd.bnd file:

Include-Resource: @itext-1.4.8.jar

So here you include the name of the dependent jar, usually you can see what it is when Gradle is downloading the dependency or by browsing your maven repository.

Note that you must also include any dependent jars in your include statement.  For example, iText 2.0.8 has dependencies on BouncyCastle mail and prov, so those would need to be added:

Include-Resource: @itext-2.0.8.jar,@bcmail-138.jar,@bcprov-138.jar

You may need to add these as runtime dependencies so Gradle will have them available for inclusion.
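For example, the runtime declarations for the iText 2.0.8 case might look like this in build.gradle (the BouncyCastle coordinates are from memory, so verify the exact group and artifact names against your repository; note also that the downloaded jar file names determine what goes into Include-Resource):

runtime group: 'com.lowagie', name: 'itext', version: '2.0.8'
runtime group: 'bouncycastle', name: 'bcmail-jdk14', version: '138'
runtime group: 'bouncycastle', name: 'bcprov-jdk14', version: '138'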

If you use a zip tool to crack open your module jar, you'll see that all of the individual jars have been exploded and all classes are in the jar.

Option 4 - Include the Jars in the Module

The last option is to include the jars in the module itself, not as an uber module, but just containing the jar files within the module jar.

Similar to option 2 and 3, you will declare your runtime dependencies in the build.gradle file.

The bulk of the work is going to be done in the bnd.bnd file.

First you need to define the Bundle-ClassPath attribute to include classes in the module jar but also the extra dependency jars.  In the example below, I'm indicating that my iText jar will be in a lib directory within the module jar:

Bundle-ClassPath:\
  .,\
  lib/itext.jar

Rather than use the Include-Resource header, we're going to use the -includeresource directive to pull the jars into the bundle:

-includeresource:\
  lib/itext.jar=itext-1.4.8.jar

In this format we're saying that lib/itext.jar will be pulled in from itext-1.4.8.jar (which is one of our runtime dependencies so Gradle will have it available for the build).

This format also supports the use of wildcards so you can leave version selection to the build.gradle file.  Here's an example for including any version of commons-lang:

-includeresource:\
  lib/itext.jar=itext-1.4.8.jar,\
  lib/commons-lang.jar=commons-lang-[0-9]*.jar

If you use a zip tool to crack open your module jar, you'll find there are jars now in the bundle under the lib directory.

Conclusion

So which of these options should you choose?  As with all things Liferay, it depends.

The global option is easy as long as you don't need different versions of jars but have a lot of dependencies on the jar.  For example, if you had 20 different modules all dependent upon iText 1.4.8, global may be the best path with regards to runtime resource consumption.

Option 2 can be an easy solution if the dependent jar is also an OSGi bundle.  In this case you can allow for multiple versions and don't have to worry about bnd file editing.

Options 3 and 4 are going to be the most common routes to choose, however.  In both of these cases your dependencies are included within the module, so the OSGi class loader is not polluted with different versions of dependent jars.  They are also environment-agnostic; since the modules contain all of their dependencies, the environment does not need to be prepared prior to module deployment.

Personally I stick with Option 4 - uber jars will tend to step on each other when expanding jars that contain the same path/file (usually xml or config info).  Option 4 doesn't suffer from these sorts of issues.

Enjoy!

David H Nebinger 2016-07-07T03:55:21Z
Categories: CMS, ECM

New Maven Archetypes for JSF Portlets

Liferay - Wed, 07/06/2016 - 14:45

The Liferay Faces team is working on production support for JSF portlets in Liferay Portal 7.0 and Liferay DXP. As part of this effort, we have developed some archetypes for use with Maven 3.

Note: At this time, the archetypes (and associated dependencies like Liferay Faces Bridge) are in SNAPSHOT status.

In order to utilize the archetypes, create a file in $HOME/.m2/settings.xml that contains the following:

<?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <profiles> <profile> <id>liferay-faces-snapshots</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>liferay-faces-snapshots</id> <url>https://oss.sonatype.org/content/repositories/snapshots</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> </profile> </profiles> </settings> For a plain JSF portlet, type the following at the command line: mvn archetype:generate \ -DgroupId=com.mycompany \ -DartifactId=com.mycompany.my.jsf.portlet \ -DarchetypeGroupId=com.liferay.faces.archetype \ -DarchetypeArtifactId=com.liferay.faces.archetype.jsf.portlet \ -DarchetypeVersion=5.0.0-SNAPSHOT \ -DinteractiveMode=false For a PrimeFaces portlet, type the following: mvn archetype:generate \ -DgroupId=com.mycompany \ -DartifactId=com.mycompany.my.primefaces.portlet \ -DarchetypeGroupId=com.liferay.faces.archetype \ -DarchetypeArtifactId=com.liferay.faces.archetype.primefaces.portlet \ -DarchetypeVersion=5.0.0-SNAPSHOT \ -DinteractiveMode=false For a Liferay Faces Alloy portlet, type the following: mvn archetype:generate \ -DgroupId=com.mycompany \ -DartifactId=com.mycompany.my.alloy.portlet \ -DarchetypeGroupId=com.liferay.faces.archetype \ -DarchetypeArtifactId=com.liferay.faces.archetype.alloy.portlet \ -DarchetypeVersion=5.0.0-SNAPSHOT \ -DinteractiveMode=false For an ICEfaces portlet, type the following: mvn archetype:generate \ -DgroupId=com.mycompany \ -DartifactId=com.mycompany.my.icefaces.portlet \ -DarchetypeGroupId=com.liferay.faces.archetype \ -DarchetypeArtifactId=com.liferay.faces.archetype.icefaces.portlet \ -DarchetypeVersion=5.0.0-SNAPSHOT \ -DinteractiveMode=false For a RichFaces portlet, type the following: mvn archetype:generate \ -DgroupId=com.mycompany \ -DartifactId=com.mycompany.my.richfaces.portlet \ -DarchetypeGroupId=com.liferay.faces.archetype \ -DarchetypeArtifactId=com.liferay.faces.archetype.richfaces.portlet \ -DarchetypeVersion=5.0.0-SNAPSHOT \ -DinteractiveMode=false Liferay IDE / Liferay Developer Studio If you are developing your JSF portlets with Eclipse for Java EE Developers, you can install the Liferay IDE plugins. Otherwise, if you are developing with Liferay Developer Studio, the plugins are installed by default. After your project is created with the mvn archetype:generate command, you can import the project into Eclipse using the following steps: 1. Click File -> Import ... 2. Expand the "Maven" category 3. Click on "Existing Maven Projects" and click Next
4. Enter the full directory path to your newly created project 5. Click Finish   In order to deploy the portlet, simply drag the project to the "Liferay 7" or "Liferay DXP" server in the Servers pane.   Neil Griffin 2016-07-06T19:45:13Z
Categories: CMS, ECM

The State of Maven Development in Liferay

Liferay - Tue, 07/05/2016 - 19:30

There's been some confusion over the state of Maven support for Liferay. As the Product Manager for Developer Tools, I feel responsible for the confusion and felt the need to clear up any questions.

Will Liferay support Maven development for 6.1, 6.2, and 7.0?

Short answer, yes, most definitely!

In 6.1 and 6.2, we’ve provided support through the liferay-maven-support plugin. Support for this plugin will continue as long as we have users on 6.1 and 6.2. It is now in maintenance mode and will only receive critical bug fixes. We will be releasing fixes to 6.2.6-SNAPSHOT until the next release.

Maven Support in Liferay 7.0 is currently in a state of transition. I believe this is the cause of much of the confusion. Liferay 7.0 will no longer use liferay-maven-support plugin. In 7.0, different tasks will require different plugins. Service Builder has its own plugin; building SASS has its own plugin; etc.

Why did we decide to go this route?

The reason we decided to go in a different direction is that much of our tooling had previously resided inside portal-impl.jar. In 7.0, many of the tools were pulled out and live on their own. This gave us a new opportunity for a fresh start. We decided to turn each of these tools into its own Maven plugin. This means that any time there is a new release of a tool, a new version of its plugin will be released as well, since they are bundled together.

It also allows us to decouple the archetypes from the plugin. Previously they were released together and it caused some issues when we wanted to release one without the other.

What are your future plans for Maven?

As I said, the liferay-maven-support plugin is now in bug-fix mode. No more new features will be added. We will continue fixing bugs. We also will not be releasing any new archetypes.

In 7.0, there are still some features missing that were available in maven-support-plugin. We are currently working on theme building. Archetypes are also being worked on right now. We are also working on a simple maven developer guide.

What is the current status? Is it possible to use Maven for a new project with Liferay 7?

For 6.1 and 6.2, nothing has changed. Continue to use the liferay-maven-support plugin.

For 7.0, you can absolutely use Maven to start your next project. If you want to see samples of the new plugins in use, take a look at the Maven Blade Samples. Archetypes aren’t ready yet, so use the samples if you need a starting point. The only thing you cannot do yet is create a theme using only Maven. We recommend you take a look at the new theme tools that leverage Node.js if you can. They provide much more than a purely Maven solution ever could.

Is there any documentation?

Not yet. As mentioned before, we are working on a Maven development guide. For now, I recommend you use Maven Blade Samples as a guide.

David Truong 2016-07-06T00:30:38Z
Categories: CMS, ECM

Searching for JPA entities in Liferay

Liferay - Thu, 06/30/2016 - 15:32

So you want to search for your custom JPA entities in Liferay? Quite some documentation is already available but it is somewhat scattered over several blogposts and articles. With this blogpost I would like to give you a summary of everything you need to do, or more precisely everything we did, to make custom JPA entities searchable in the standard Liferay Search Portlet.

The requirement

One of our customers wanted the search results to contain not only the default Liferay objects like articles, documents, etc., but also some of his own custom domain objects. We had created a custom lightweight Support Ticket framework with a limited set of functionalities. It allowed a user to create a ticket and post comments to it. The additional requirement was that a user had to be able to also search for these tickets based on title, description and... oh yeah, the comments too. In this blog I will use a very simplified version of this domain that shows all the steps necessary to make a JPA entity searchable. The domain only has one JPA entity, SupportTicket, together with a repository and service. There is also a simple portlet that can be used to create and show instances of this entity.

So let's go on a journey, a quest you might even call it, to enable the search functionality for your custom JPA entities. Eventually you will get to the end point where you will use the Liferay Search Portlet to search and find your own JPA entities. You can also create your own custom search (taking customization to the max!) but that is outside the scope of this blog post; in the Sources section you can find links for more information on this matter. In a vanilla Liferay 6.2 the result should look like this:

Enter the Indexer

Liferay’s search & indexing functionality is provided by Apache Lucene, a Java search framework. It converts searchable entities into documents. Documents are custom objects that correspond to searchable entities. Usually performing full text searches against an index is much faster than searching for entities in a database. You can find more information about Lucene on its own website. To start you will need to create an indexer class. This class will hold the information required to transform your JPA entity into a Lucene Document. Next you will need to register this Indexer in the portlet so that the Liferay framework knows about it and can use it when necessary.

Creating the Indexer

I already made clear that the Indexer class is responsible for creating the Lucene documents for your JPA entity. It is important to know that this is where you decide which fields of your entity get indexed in order to be searchable. You can also specify whether a field needs to be a term or a phrase. And of course there are some other settings you can implement, such as the name of the associated portlet, permissions, ... Have a look at the Sources for blog posts with more information. Liferay provides you with a default implementation named BaseIndexer as an easy start. This abstract class should be extended and the required methods implemented. Enough of the theoretical stuff, let's get started. As soon as you extend BaseIndexer, you will need to override the following methods (a small sketch of the simplest ones follows the list below). I'm not going to go into the details of all the implementations; I refer you to the GitHub project. Mostly the implementation will depend on your own requirements and domain.

  • doDelete: delete the document that corresponds to the object parameter
  • doGetDocument: specify which fields the Lucene Index will contain
  • doGetSummary: used to show a summary of the document in the search results
  • doReindex (3 times): will be called when the index is updated
  • getPortletId (2 times): to be able to narrow searches for documents created in a certain portlet instance (hence it needs to be the full portlet Id)
  • getClassNames: the FQCN of the entities this indexer can be used for
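To make that list a little more concrete, here is a minimal sketch of two of the simpler overrides, assuming the TICKET_CLASS_NAME and PORTLET_ID constants that also appear in the doGetDocument example further down (exact signatures can vary slightly between Liferay versions, so check your target release):

@Override
public String[] getClassNames() {
    // this indexer only handles our Ticket entity
    return new String[] { TICKET_CLASS_NAME };
}

@Override
public String getPortletId() {
    // ties the indexed documents to our portlet
    return PORTLET_ID;
}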

Something important to notice in this class is the difference between document.addKeyword and document.addText in the doGetDocument method.

  • addKeyword: adds a single-valued field, with only one term
  • addText: adds a multi-valued field, with multiple terms separated by a white space

The doGetDocument method is the most important piece of code in the TicketIndexer. In this method an incoming Object is transformed into an outgoing (Lucene) Document. The object is of type Ticket as the Indexer has been registered to be used for objects of that type. This registration occurs in the getClassNames method. A new Document object holding the necessary searchable fields can be created from this Ticket. Notice that all the Keyword/Text fields are already predefined in Liferay so you can use these constants. This is quite handy but you can always provide your own names.

@Override
protected Document doGetDocument(Object obj) throws Exception {
    Ticket ticket = (Ticket) obj;
    List<TicketComment> comments = BeanLocator.getBean(TicketCommentService.class).getComments(ticket);

    Document document = new DocumentImpl();
    document.addUID(PORTLET_ID, ticket.getId());
    document.addKeyword(Field.COMPANY_ID, PortalUtil.getDefaultCompanyId());
    document.addKeyword(Field.ENTRY_CLASS_NAME, TICKET_CLASS_NAME);
    document.addKeyword(Field.ENTRY_CLASS_PK, ticket.getId());
    document.addKeyword(Field.PORTLET_ID, PORTLET_ID);
    document.addText(Field.TITLE, ticket.getSubject());
    document.addText(Field.CONTENT, ticket.getDescription());
    document.addText(Field.COMMENTS, Collections2.transform(comments, getComments()).toArray(new String[] {}));

    return document;
}

Registering the Indexer

Once the Indexer is created, you will need to register it in the Liferay platform. This is a very simple action and it is completed by adding the following line into your portlet’s liferay-portlet.xml (check the DTD for the exact location).

<indexer-class>FQCN of your custom Indexer</indexer-class>

Once your portlet is redeployed Liferay will automatically register your indexer into its framework.

Tip: Just by registering your indexer, existing entities won’t be indexed. As a Liferay admin however, you can trigger a reindex from the Control Panel. Just go to Configuration > Server Administration > Resources (tab) and locate the Execute button next to the Reindex all search indexes entry. Be advised that if you already have a large index (due to web content, documents, …) this may take a while.

Info: You can use some index inspection tool like Luke to inspect and search through your index. If your system contains at least one instance of your custom JPA entity, you should see those pop up in Luke after doing a reindex, which means your Indexer implementation did its job.

Info: If you are eager to know whether or not your current progress has had any impact, you can already update the Liferay Search Portlet as described in Executing the Search.

Using the indexer programmatically

No doubt it is great that your existing entities are now indexed. But you don’t want to trigger this indexing yourself manually, do you? By all means you don’t, you want it to happen automagically. Or at least programmatically. That is why you are reading this blog post anyway. So how about actually using the indexer yourself in the creation of a support ticket? Actually you want to use your newly created Indexer at the right time. But what is the right time? Well, right after a new entity is created. Or updated. Or even deleted. So you will need to use your Indexer at these moments, using the code below. We used the nullSafeGetIndexer as it returns a dummy indexer when no indexer can be found for the class, contrary to the default getIndexer() which returns null.

Indexer indexer = IndexerRegistryUtil.nullSafeGetIndexer(Ticket.class.getName());
indexer.reindex(ticket);

This code instructs your indexer, which you've registered with the FQCN of your entity, to reindex any added or updated entities. It delegates to the doReindex methods you have implemented in your own Indexer. When an entity is deleted its corresponding document should not show up in any search results anymore. You will need to call the code below when you are deleting your entity.

Indexer indexer = IndexerRegistryUtil.nullSafeGetIndexer(Ticket.class.getName());
indexer.delete(ticket);

Redeploying your portlet should do the trick of having each new entity being registered as a Document in the index as well. Go and try it out (if you have a create option of course).

Executing the Search

In this blog post Liferay's Search Portlet will be used to search and find your custom entities. You can always create a custom search portlet as well; check the Sources section for articles on how to achieve that. But for our customer we decided that integrating the custom JPA entity into Liferay's Search Portlet was the best solution as other content, e.g. Web Content, should also be searchable. By doing this the client had a nice integration with the default Liferay functionality. The Liferay Search Portlet will need to be updated to also take into account your entity by adding the portlet to a page and adjusting its configuration. But first check out the portlet's normal behaviour. As an admin, add the portlet to a page of your choice. Search for the value of one of the earlier defined searchable fields in the Indexer and hit Search. Notice that there are no search results found and that this is clearly communicated.

Now go to the Configuration panel of the portlet, select Advanced and add the FQCN of your entity between quotes in the asset_entries list. Keep in mind the comma separation.

{ "displayStyle": "asset_entries", "fieldName": "entryClassName", "static": false, "data": { "frequencyThreshold": 1, "values": [ "com.liferay.portal.model.User", "com.liferay.portlet.bookmarks.model.BookmarksEntry", "com.liferay.portlet.bookmarks.model.BookmarksFolder", "com.liferay.portlet.blogs.model.BlogsEntry", "com.liferay.portlet.documentlibrary.model.DLFileEntry", "com.liferay.portlet.documentlibrary.model.DLFolder", "com.liferay.portlet.journal.model.JournalArticle", "com.liferay.portlet.journal.model.JournalFolder", "com.liferay.portlet.messageboards.model.MBMessage", "com.liferay.portlet.wiki.model.WikiPage", "be.olaertskoen.blog.search.Ticket" ] }, "weight": 1.5, "className": "com.liferay.portal.kernel.search.facet.AssetEntriesFacet", "label": "asset-type", "order": "OrderHitsDesc" },

Hit Save, close the panel and perform the same search again. Notice that the search result page has changed. It looks like the portlet found something but doesn't really know how to show it. Oh me oh my...

Probably this is not exactly what you wanted. Hell, it should be nothing you ever wanted! What use was all of this? Well, it was the preparation for what is to follow. If you followed the hints or tip above, you already know the indexer has done its work. Now it is time to actually use its results.

How about some usable search results?

You now know that your entities can be found, but the search results look nothing like you ever dreamed of. Next up you are going to get that party started and bring this search functionality quest to an end. As you probably know, or not if you are new to the game, Liferay uses its Asset framework extensively. Not only in the Asset Publisher or for all of its web content, but also for its search results. Yes, that is right: the results in the Search Portlet are rendered using Assets. Therefore you will also need to create an Asset Renderer for your JPA entities. And, consistent with the Indexer, you will need to register it as well. Lastly you will need to use it one way or another.

Creating an Asset Renderer

Similar to the Indexer, there is a base class you can easily extend, BaseAssetRenderer, to implement your own version. This class will give you a bunch of methods you will need to override. One of them is to provide a summary for your entity, another is for the title. Two other important methods are render and getUrlViewInContext. The first one, render, will return a String with the url leading to the full content of your entity. It will use an Asset Publisher on a hidden page created for the Search Portlet. In our project we have an entire page reserved for this so we need to return the url of that detail page (here the page to edit the ticket).

@Override
public String render(RenderRequest renderRequest, RenderResponse renderResponse, String template) throws Exception {
    return TicketDetailFriendlyUrl.create(ticket.getId());
}

The second method, getUrlViewInContext, will be used by the Search Portlet to render the asset in context. This is an option you can activate in the Search Portlet. The result is that the Asset Entry is shown in its own context, unlike the previous option where the entry is shown in an Asset Publisher. This is actually the default setting and, in my opinion, the nicest solution.

@Override
public String getURLViewInContext(LiferayPortletRequest liferayPortletRequest, LiferayPortletResponse liferayPortletResponse, String noSuchEntryRedirect) throws Exception {
    ThemeDisplay themeDisplay = (ThemeDisplay) liferayPortletRequest.getAttribute(WebKeys.THEME_DISPLAY);
    Group group = themeDisplay.getScopeGroup();
    boolean isPrivate = themeDisplay.getLayout().isPrivateLayout();
    String groupUrl = PortalUtil.getGroupFriendlyURL(group, isPrivate, themeDisplay);

    return groupUrl + TicketDetailFriendlyUrl.create(ticket.getId());
}
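The title and summary overrides mentioned earlier can stay very small; here is a sketch, assuming the Liferay 6.2 signatures (later versions pass extra request/response parameters) and the ticket field held by the renderer:

@Override
public String getTitle(Locale locale) {
    return ticket.getSubject();
}

@Override
public String getSummary(Locale locale) {
    // keep the result list readable by trimming the description
    return StringUtil.shorten(ticket.getDescription(), 200);
}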

Fabricating the Asset Renderer

For assets there is actually a required Factory to be used for the AssetRenderers. But again this is easy to achieve as Liferay provides a BaseAssetRendererFactory for you to extend. It contains three methods: one for the type, one for the class name (both methods for our entity class of course) and one method that returns an AssetRenderer. In that method you will need to create a new instance of your entity based on the incoming id and type. The latter was not required in our case as the renderer is only registered for objects of type Ticket.

@Override
public AssetRenderer getAssetRenderer(long classPK, int type) throws PortalException, SystemException {
    TicketService ticketService = BeanLocator.getBean(TicketService.class);
    Ticket ticket = ticketService.getTicket(classPK);

    return new TicketAssetRenderer(ticket);
}

Registering the AssetRendererFactory

Unlike the Indexer you are not going to register the AssetRenderer but the AssetRendererFactory in your liferay-portlet.xml. Why else would you have created that class, right? It is easily performed by adding an asset-renderer-factory tag, containing the factory's FQCN, just before the instanceable tag.

<asset-renderer-factory>be.olaertskoen.blog.search.tickets.renderer.TicketAssetRendererFactory</asset-renderer-factory>

Warning: To actually get the portlet started, you will need to add a non-empty portlet.properties into the resources folder. We just added a property plugin.package.name.

If you redeploy your portlet, you will again see some changes in the search results.

Wait, where is the title? Where is the description? You coded that in the AssetRenderer, didn't you! Why are you not seeing any actual result? Why is the portal footer (that Powered By Liferay statement) suddenly shown in the search results? Well, Liferay would like to render an asset, but actually there is no asset. So it just can't render an Asset. If you open your server's log, you will see an error message like this:

NoSuchEntryException: No AssetEntry exists with the key {classNameId=10458, classPK=1}

While you already had existing entities, there were never any AssetEntries created for them, unless you had already implemented the Asset framework for some other reason. Hence you will need to use that framework in your entity creation process as well.

Creating Assets

You will need to adjust the add, update and delete code of your custom entities. Whenever a new entity is created, an AssetEntry needs to be created as well. When the entity is updated, the corresponding AssetEntry should be updated as well. And when an entity is deleted the corresponding AssetEntry should be deleted as well. No need for orphaned database entries! As you know, Liferay provides you with lots of LocalServices to use their framework. This is also the case for the Asset framework. When you create an instance of your entity you can use the AssetEntryLocalServiceUtil to create the corresponding AssetEntry using the following code:

String ticketFriendlyUrl = TicketDetailFriendlyUrl.create(ticket.getId());
Long userId = Long.valueOf(getExternalContext().getUserPrincipal().getName());
ThemeDisplay themeDisplay = getThemeDisplay();
Long groupId = themeDisplay.getScopeGroupId();

AssetEntryLocalServiceUtil.updateEntry(userId, groupId, ticket.getCreateDate(), ticket.getCreateDate(),
    Ticket.class.getName(), ticket.getId(), String.valueOf(ticket.getId()), 0, new long[] {}, new String[] {},
    true, null, null, null, "text/html", ticket.getSubject(), ticket.getDescription(), ticket.getDescription(),
    ticketFriendlyUrl, null, 0, 0, null, false);

It is usually a good idea to do this right before the code you added to create the index entry for your entity. Now after you redeploy, you will need to create a new instance of your custom entity. For the existing ones there still are no AssetEntries present in Liferay. But when you create a new instance an AssetEntry will be created automagically (well at least we magicians now know the programmatics behind it). So go ahead and create a new instance, you will probably already have a portlet for that.

Alert: As written earlier, you will need to include similar code in your update and delete code for your entities. In this blog post and example portlet these have been left out.
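As an illustration of the delete side (a sketch only; call it wherever you delete a Ticket, alongside the indexer.delete call shown earlier):

// remove the AssetEntry that belongs to the deleted ticket
AssetEntryLocalServiceUtil.deleteEntry(Ticket.class.getName(), ticket.getId());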

Next try and search for it using one of the properties you declared in the Indexer class and… Tadaa!

Congratulations! You now have a good looking search result. Hm... Wait, it is not really 100%... Why is the FQCN shown as type of the entity? That does not look so nice! And it is even shown twice! Merde, will this journey never end?

Naming your entities

Rest assured, our quest will soon come to an end. To provide your entities with a nice name you can again use Liferay's framework (how many rabbits are there in that hat?). All you need to do is add a translation for it, simple as that. If you look closely, it is not really the FQCN of your custom entity that is shown in the Search Portlet; it is prepended with model.resource. You can add that entire output into a language hook as the key and provide it with a value (Superentity, Batentity, Spiderentity or whatever you want to name it).

model.resource.be.olaertskoen.blog.search.Ticket=Support Ticket

Deploy this language hook and reload your previous search.

Finally, it’s over

Isn’t that nice! You are using Liferay's Search Portlet to search for your own custom entities! What a journey that has been. But our customer was very satisfied with this functionality and I hope yours will be too.

With this you now know how to enable a full-fledged search option for your custom JPA entities in Liferay. Just as a reminder, here is a short overview of the steps you took.

  • You started out with creating, registering and using an Indexer class that transforms your custom entities into Lucene Documents.
  • Next you configured the default Liferay Search Portlet to also take your custom JPA entities into account in the search queries. This was an easy configuration in the Portlet itself.
  • Last but not least you created, registered and used an AssetRenderer to nicely render the AssetEntry for your custom JPA entity in the Search Portlet. In this step you also added code to create and maintain that AssetEntry according to the lifecycle of your own entities.
Those three steps are all it takes to enable the search functionality for your custom JPA entities in Liferay. Enjoy! You can find all the example code for this post in my Github project.

Sources

As mentioned at the start of this blog post, there are several articles and blog posts concerning search and what is required, but we needed to combine several of them to fully enable all the features described in this blog post. Below is a list of the Liferay sources we used. They are still worth reading if you want to know more about what lies behind the scenes or if you need to implement some other specific features. Koen Olaerts 2016-06-30T20:32:02Z
Categories: CMS, ECM

Fun with Generic Content Templates

Liferay - Wed, 06/29/2016 - 01:14
I came across the idea of a generic template recently, and put it to good use. If you don't know what I mean by generic template, let me clear that up right away. A generic template is really just a content template that is not tied to a structure. The point of it is that you can separate your template code as you see fit, including the generic templates in your main one. All you need to do is add this line of (Velocity markup) code in your template:

#parse ("$templatesPath/12345")

where 12345 is the template key of your generic template. That's it. All the code in your generic template gets pulled into your main template and treated as one.

There! You know what I mean by generic templates now. So, let's talk about the fun I had with them. I've come to be used to Velocity, so the sample code below is all vm.

Here's my main template:

<h3>$txtPoemTitle.getData()</h3>
<div>
    $htmlVerse.getData()
</div>
<div>
    $htmlAbout.getData()
</div>

At a glance, you can tell I am displaying three fields:
  • a title
  • a poem (rich text)
  • a few comments about the poem (rich text)


Contrived requirement #1: Style it

After one minute of frenzied googling, I have this:

<style>
.poem-body {
    float: left;
    font-size: 16px;
    color: #989898;
    background-color: #B7E2F0;
    border: solid 1px #B7E2F0;
    border-radius: 5px/5px;
    -webkit-box-shadow: 0 23px 17px 0 #B7E2F0;
    -moz-box-shadow: 0 23px 17px 0 #B7E2F0;
    box-shadow: 0 23px 17px 0 #B7E2F0;
}

.about-poem {
    float: left;
    margin: 15px;
    font-size: 16px;
    font-style: italic;
    background-color: #efefef;
    color: #555555;
}
</style>

<h3>$txtPoemTitle.getData()</h3>
<div class="poem-body">
    $htmlVerse.getData()
</div>
<div class="about-poem">
    $htmlAbout.getData()
</div>

Calm down! Ugly, inefficient, but as I said, contrived. I'm just trying to make a point here.

All good. Now, let's move the styling into its own template - a generic template - one without a structure association. Now the main template looks like this:

#parse ("$templatesPath/73906")
<h3>$txtPoemTitle.getData()</h3>
<div class="poem-body">
    $htmlVerse.getData()
</div>
<div class="about-poem">
    $htmlAbout.getData()
</div>

73906 is indeed the template key of my generic template, as shown below.

Contrived requirement #2: Create an alternate style for our poem template

A fiery red sonnet style. It looks very similar to the cool blue verse style, just some different colors.

<style>
.poem-body {
    float: left;
    font-size: 16px;
    color: #ffffff;
    background-color: #CC0033;
    border: solid 1px #CC0033;
    border-radius: 5px/5px;
    -webkit-box-shadow: 0 23px 17px 0 #CC0033;
    -moz-box-shadow: 0 23px 17px 0 #CC0033;
    box-shadow: 0 23px 17px 0 #CC0033;
}

.about-poem {
    float: left;
    margin: 15px;
    font-size: 16px;
    font-style: bold;
    background-color: #efefef;
    color: #555555;
}
</style>

Now, I can simply change my generic template include in my main template as below to reference the alternate template.

#parse ("$templatesPath/73945")
<h3>$txtPoemTitle.getData()</h3>
<div class="poem-body">
    $htmlVerse.getData()
</div>
<div class="about-poem">
    $htmlAbout.getData()
</div>

Contrived requirement #3: Let the user pick which style to apply

This is slightly more involved. I modify my poem structure to have, in addition to the original three fields, a select field named Mood with three options, as shown below. Take note of the values of those options.

Alright! Now over to my main template to use the value from this field.

#parse ("$templatesPath/$selectMood.getData()")
<h3>$txtPoemTitle.getData()</h3>
<div class="poem-body">
    $htmlVerse.getData()
</div>
<div class="about-poem">
    $htmlAbout.getData()
</div>

And we're done. Here's my content with the Mood options. The screenshots that follow show the rendered results.

The Content Item:

Selecting Cool Blue Verse:

Selecting Fiery Red Sonnet:

In Conclusion

If you don't see this as useful, you're probably thinking: why go to all this trouble instead of creating an alternate content template altogether and just using that? Well, let me conclude by highlighting what we accomplished.
  1. Separation of code. We separated the template code between a main template and one or more generic templates. Sure, we just did CSS in the above example, but these are first-class Velocity templates; they could have anything a Velocity template can have - CSS, HTML, JavaScript, server-side calls into Liferay's universe of APIs. Power!
  2. Reuse. DRY principle and all. Each generic template is now usable in other main templates.
  3. User-empowerment. By adding a select box to the structure, we've now given the user the ability to switch the generic template that gets used. This makes for some useful indirection. 

Quick FYI on template keys: they are maintained through a LAR export/import, just in case you were wondering. 


Happy Fourth, America! To the rest of the world, I hear Jeff Goldblum will be peddling USB drives on an alien mothership once again. If you don't get that reference, consider yourself blessed and enjoy your July anyhow. Javeed Chida 2016-06-29T06:14:27Z
Categories: CMS, ECM