AdoptOS

Assistance with Open Source adoption

ECM

Getting Started with Building Liferay from Source

Liferay - Thu, 05/18/2017 - 18:03

When a new intern onboards in the LAX office, their first step is to build Liferay from source. As a side-effect of that, the person handling the onboarding usually sees that the reported ant all time is in hours rather than minutes, and then starts asking people, "Is it normal for it to take this long to build Liferay for the first time?"

It happens frequently enough that sometimes my hyperactive imagination supposes that a UI intern's entire first day must be spent cloning the Liferay GitHub repository and building it from source, and then going home.

In hindsight, this must be what the experience is like for a developer in the community looking to contribute back to Liferay as well. Though, rather than go home in a literal sense, they might stare at the source code they've downloaded and built (assuming they got that far) and think, "Wow, if it took this long just to get started, it must be really terrible to try to do more than that," and choose to go home in a less literal way, but with more dramatic flair.

One of the many community-related discussions we have been having internally is how we can make things better, both internally and externally, when it comes to working with Liferay at the source code level. Questions like, "How do we make it easier to compile Liferay?" or "How do we make it easier to debug Liferay?" After all, just how open source are you if it's an uphill battle to compile from source in order to find out whether things are already fixed in branch?

We don't have great answers to these problems yet, but I believe that we can, at a minimum, provide a little more transparency about what we are trying internally to make things better for ourselves. Sharing that information might give all of us a better path forward, because if nothing else, it lets us ask important questions about the pain points rather than bikeshed color ones.

Step 1: Upgrade Your Build System

Let's say I were to create a survey asking the question, "Which of the following numbers best describes your build time on master in minutes (please round up)?" and gave people a list of options, ranging from 5 minutes and going all the way up to two hours.

This question makes the unstated assumption that you are able to successfully build master from source, because none of the options is, "I can't build master from source." Granted, it may seem strange that I call that an "assumption", because why would you not be able to build an open source product from source?

Trick question.

If you've seen me at past Liferay North America Symposiums, and if you were really knowledgeable about Dell computers in the way that many people are knowledgeable about shoes or cars, you'd know that I've been sporting a Dell Latitude E6510 for a very long time.

It's a nice machine, sporting a mighty 8 GB of RAM. Since memory is one of the more common bottlenecks, this made it at least on-par with some of the machines I saw developers using when I visited Liferay clients as a consultant. However, to be completely honest, a machine with those specifications has no hope of building the current master branch of Liferay without intimate knowledge of Liferay build process internals. Whenever I attempted to build master from source without customizing the build process, my computer was guaranteed to spontaneously reboot itself in the middle.

So why was this not really a problem for other Liferay developers?

Liferay has a policy of asking its developers to accept upgrades to their hardware once every two to three years. The idea is that if new hardware increases your productivity, it's such a low-cost investment that it's always worthwhile. A handful of people resist upgrades (inertia, emotional attachment to Home and End keys, etc.), but since almost everyone chooses to upgrade, Liferay has an ivory tower problem: much of Liferay has no idea what it's like to even start up Liferay on an older machine, let alone compile Liferay on one. So what does a build of master actually demand? Roughly this:

  • Liferay tries to do parallel builds, which consumes a lot of memory. To successfully build Liferay from source, a dedicated build system needs 8 GB of memory, while a developer machine with an IDE running needs at least 16 GB of memory.
  • Liferay writes an additional X GB every time you build, a lot of it being just copies of JARs and node_modules folders. While it will succeed on platter drives, if you care about build time, you'll want Liferay source code to live on a solid-state drive to handle the mass file creation.

Eventually, I ran into a separate problem that required a computer upgrade: I needed to run a virtual machine that itself wanted 4 GB, and that, combined with running Liferay alongside an IDE, meant my machine wasn't up to the task. After upgrading, the experience of building Liferay is substantially different from how it used to be. While I have other problems, like an oversensitive mousepad, building Liferay is no longer something that makes me wonder what else could possibly go wrong.

If you weren't planning on upgrading your computer in the near future, it doesn't make sense to upgrade just to build Liferay. Instead, consider spinning up a virtual machine in a cloud computing environment that meets the minimum requirements, whether in your company's internal cloud infrastructure or as a spot instance on Amazon EC2. You can then use that server to perform the build and download the result to your local computer.
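For example, here's a minimal sketch of that remote-build workflow over plain SSH; the host name build-box is hypothetical, and it assumes the default layout where ant all assembles a runnable bundle in a bundles folder next to the repository:

# Clone and build on the remote machine (this is the slow part).
ssh build-box 'git clone https://github.com/liferay/liferay-portal.git'
ssh build-box 'cd liferay-portal && ant all'

# Package the assembled bundle and pull it down to your local machine.
ssh build-box 'tar czf bundles.tar.gz bundles'
scp build-box:bundles.tar.gz .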

Step 2: Clone Central Repository

So let's assume you've got a computer or virtual machine satisfying the requirements listed above. The next step is to get the source code so you can use this machine to build Liferay from source.

The first step that interns get hung up on is cloning the Liferay repository. If you've ever tried to do that, you'll have found that Liferay has violated one of the best practices of version control by committing a large folder full of binary files, .gradle. As a result of this massive folder, GitHub sends us angry emails and, of course, cloning our repository takes hours.

How does Liferay make this better internally? Well, in the LAX office, the usual answer is to plug in the ethernet cable. Liferay invested heavily in fast internet, and so simply plugging in the ethernet cable makes the multi-hour process finish in 30 minutes.

However, it turns out that there is actually a better answer, even in the LAX office. Each office has a mirror that holds archives of various GitHub repositories, including liferay/liferay-portal. We suspect the original being mirrored is maintained by Quality Assurance, because we have heard that keeping all of our thousands of automated testing servers in sync used to result in angry emails from GitHub. Since it's an internal mirror, this means that downloading X GB and unzipping it takes a few minutes, even over WiFi, and it's on the order of seconds if you plug in your ethernet cable.

So, in order to improve our internal processes, we've been trying to get the people who manage our new hires and new interns to recognize that such a mirror exists and to use it during their onboarding process to save a lot of time for new hires on their first day.

So what does this mean for you?

Essentially, if you plan to clone the code directly onto your computer for simplicity, make sure you do it at a time when you won't need to shut down the computer for a few hours and when you don't need it for anything else (maybe run it overnight), because it's a time-consuming process.

Alternately, have a remote server perform the clone and then download an archive of the .git folder to your local computer, similar to what Liferay is trying to do internally. This frees up your machine to do useful things; even spinning up an Amazon EC2 spot instance (like an m1.small) and bringing things down with SCP, or with an S3 bucket as an intermediate point, may be beneficial.
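A minimal sketch of that clone-then-download workflow, again with a hypothetical clone-box host:

# Perform the slow clone on the remote server.
ssh clone-box 'git clone https://github.com/liferay/liferay-portal.git'

# Archive just the .git folder and download it.
ssh clone-box 'tar czf liferay-portal-git.tar.gz liferay-portal/.git'
scp clone-box:liferay-portal-git.tar.gz .

# Locally, unpack the archive and restore a working tree from it.
tar xzf liferay-portal-git.tar.gz
cd liferay-portal && git checkout -- .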

Step 3: Build Central Repository

The next step is your first build from source. This is done with a single command that theoretically handles everything. However, before you run this single command, you might need to do a few things to reduce the resources it consumes.

  • Liferay issues a lot of requests to the NPM registry during parallel builds. You can cap this by checking build.properties for nodejs.npm.args, taking the commented-out line, and adding it to your own build.USERNAME.properties.
  • Liferay includes a lot of extra things most people never need. You can remove these by checking build.properties for build.include.dirs and using its commented-out value in your build.USERNAME.properties, or adjusting it if you want more than what it tries by default (see the sketch after this list).
  • If you're on Windows, disable Windows Defender (or at least disable it on specific folders or drives). The ongoing scan drastically slows down Liferay builds.
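The exact values live in build.properties itself, so rather than guessing at them here, a safe way to apply the first two tweaks is to look up the commented-out defaults and copy them into your override file; a minimal sketch:

# Run from the repository root: locate the commented-out defaults.
grep -n 'nodejs.npm.args\|build.include.dirs' build.properties

# Copy those lines into your override file (named after your OS user) and
# remove the leading '#' so they take effect.
touch "build.$(whoami).properties"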

After you've thought through all of the above, you're ready for the command itself. It requires that you download and install Apache Ant, and knowing that this is what I'm asking you to download, you might also realize that the entry point for everything is build.xml.

ant all

So now you've built the latest Liferay source code, right?

Another trick question!

What's in the master branch of liferay-portal is actually not the latest code. Liferay has started moving things into subrepositories, which you can see from the hundreds of strangely named repositories that have popped up under the Liferay GitHub account.

However, a lot of these repositories are just placeholders. These placeholders are in what's called "push" mode, where code from the liferay-portal repository is pushed to the subrepository. However, a handful of them (five at the time of this writing) are actually active, in what's called "pull" mode, where code is pulled from the subrepository into the liferay-portal repository on demand. You can tell the difference by looking at the .gitrepo file in each subrepository and checking the line describing the mode.
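For reference, a .gitrepo file is a small INI-style file; the lines that matter for this check look roughly like the sketch below (the repository name is made up, and real files carry a few more fields):

[subrepo]
	remote = git@github.com:liferay/com-liferay-example.git
	branch = master
	mode = pull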

However, because all of those .gitrepo files are actually also in the central repository, after you've cloned liferay-portal you can use them to find out which subrepositories are active with some git, grep, and xargs magic run from the root of the repository.

git ls-files modules | grep -F .gitrepo | xargs grep -Fl 'mode = pull' | xargs grep -h 'remote = ' | cut -d'=' -f 2

I will dive into more detail on the subrepositories in a later entry when we talk about submitting fixes. For now, they're not relevant to getting off the ground, beyond an awareness that they exist and that they add additional wrinkles to the fix submission process.

Step 4: Choose an IDE

At this point, you've built Liferay, and the next thing you might want to do is point an IDE with a debugger at the artifact you've built, so that you can see what it's doing after you start it up. However, if you point an IDE at the Liferay source code in order to load the source files, whether it's NetBeans, Eclipse, or IntelliJ, you'll notice that while Liferay has a lot of default configurations populated, these default files are missing about 90% of Liferay's source folders.

If you're using NetBeans, the people who have forked the Liferay Source Netbeans Project Builder overlap exactly with the team I know for sure uses NetBeans, so this tool will help you hit the ground running. Since there are recent commits to the repository, I'm confident the NetBeans users team actively maintains it, though I can't say with equal confidence how they'll react to the news that I'm telling other people about it.

If you're using Eclipse or Liferay IDE, then Jorge Diaz has you covered with his generate-modules-classpath script, which he has blogged about in the past; his blog post explains its capabilities much more clearly than I could in a mini-section at the end of this getting started guide.

If you're using IntelliJ IDEA Ultimate, you can take advantage of the liferay-intellij project and leave any suggestions there. It was originally written as a streams tutorial for Java 7 developers rather than as a tool, and I still try to keep it as a streams tutorial even as I make improvements to it, but I'm open to any ideas that make people's lives easier when interacting with Liferay core code.

Step 5: Bend Liferay to Your Will

So now that everything is set up for your first build, and you're able to at least attach a debugger to Liferay, the next thing is to explain what you can do with this newly discovered power.

However, that's going to require walking through a non-toy example for it to make sense, so I'll do that in my next post so that this one can stay as a "Getting Started" guide.

Minhchau Dang 2017-05-18T23:03:48Z
Categories: CMS, ECM

Forms in DXP

Liferay - Thu, 05/18/2017 - 12:10

As we all know, Liferay 7 (DXP) comes with many new features and improvements, both functional and architectural. One of them is Forms, an improved version of Kaleo Forms and DDL. Kaleo Forms and DDL have limited capabilities and are good for only a few use cases.

DXP now provides a simple way for content creators to create forms, much like adding content to a page. It's easy to modify any field or attribute of a form and publish it again. There are a lot of extra features, like multistep forms, multiple layouts, validation, and data providers. I will explain them with a real example: a registration form.

1. You can add new forms by navigating to Menu > Content > Forms. Add a new form named “Registration Form”, then add the fields you need to capture for registration:

2. Each field comes with configuration attributes. For example, a text field has these attributes:

Label: The field label.

Help Text: Help text explaining to the user what the field is about.

Single Line/Multiple Line: Defines whether the field value can contain a single line or multiple lines.

Required: Marks the field as mandatory.

Predefined Value: The default value of the field.

Placeholder Text: Text to assist the user; it is not actually submitted.

Field Visibility Expression: A condition that controls whether the field is displayed. There are functions and operators for use in visibility expressions, like between, equals, sum, etc.

Enable Validation: Extra validation, like contains/does not contain, URL, or email for text, and less than, greater than, or equals for numbers. The default is false.

Show Label: Whether to show the label to the end user; by default this is true.

Repeatable: Whether the field can be repeated. The default is false.

3. After adding a few fields to the first page, I am done with my basic registration fields, but now I need to add a few more fields related to education on a second page, “Education Details”, since putting all fields on the same page is not user friendly. DXP Forms provides multistep forms to achieve this:

4. Now, on the “Education Details” screen, I need a list of the universities where an employee may have completed their education. I could define that list using a basic select dropdown, but I need it to be dynamic so the dropdown is always updated with new universities. Here DXP provides “Data Providers”, which let you populate select box values dynamically. You can define REST data providers and specify which JSON attributes are displayed and stored. You can create data providers via the top-most icon (right side) > Data Providers.
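As a sketch, the REST endpoint behind such a data provider might return a JSON list like the one below (the field names are hypothetical); you would then map one attribute as the displayed value and another as the stored value:

[
	{ "id": "1", "name": "University of Madrid" },
	{ "id": "2", "name": "University of Barcelona" }
]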

5. Now this data provider can be used in the “Registration Form” to display the university details field.

6. Other features of Forms:

Captcha: You can enable CAPTCHA for form submission.

Redirect URL on Success: The landing page shown after a successful form submission.

Storage Type (JSON by default): How the form data is stored in the DB; you can write a custom method for storing form data.

Workflow: The Kaleo workflow definition to be used by the form.

Email Notification: You can configure an email to be triggered after form submission.

View Entries: You can see all submitted entries for a specific form.

7. Multiple layouts are also configurable by dragging fields around:

Forms vs. DDL:

ANKIT SRIVASTAVA 2017-05-18T17:10:24Z
Categories: CMS, ECM

ServiceBuilder and Upgrade Processes

Liferay - Wed, 05/17/2017 - 22:37
Introduction

Today I ran into someone having issues with ServiceBuilder and the creation of UpgradeProcess implementations. The doco is a little bit confusing, so I thought I'd do a quick blog post sharing how the pieces fit...

Normal UpgradeProcess Implementations

As a reminder, you register UpgradeProcess implementations to support upgrading from, say, 1.0.0 to 2.0.0, when there are things you need to code so that, when the upgrade is complete, the system is ready to use your module. Say, for example, that you're storing XML in a column in the DB and in 2.0.0 you've changed the DTD; for those folks that already have 1.0.0 deployed, your UpgradeProcess implementation would be responsible for processing each existing record in the database to change the contents over to the 2.0.0 version of the DTD. For non-ServiceBuilder modules, it is up to you to write the initial UpgradeProcess code for the 0.0.0 -> 1.0.0 version.
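As a minimal sketch of that DTD migration (the FooEntry table, column names, and class names are hypothetical, and the registration uses the DXP 7.0-era UpgradeStepRegistrator API):

import com.liferay.portal.kernel.upgrade.UpgradeProcess;

import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UpgradeFooXml extends UpgradeProcess {

	@Override
	protected void doUpgrade() throws Exception {

		// Walk every existing FooEntry row and rewrite its stored XML from
		// the 1.0.0 DTD to the 2.0.0 DTD.

		try (PreparedStatement select = connection.prepareStatement(
				"select fooEntryId, xml from FooEntry");
			ResultSet resultSet = select.executeQuery()) {

			while (resultSet.next()) {
				long fooEntryId = resultSet.getLong(1);

				// Hypothetical migration: swap the doctype reference.

				String xml = resultSet.getString(2).replace(
					"foo_1_0_0.dtd", "foo_2_0_0.dtd");

				try (PreparedStatement update = connection.prepareStatement(
						"update FooEntry set xml = ? where fooEntryId = ?")) {

					update.setString(1, xml);
					update.setLong(2, fooEntryId);
					update.executeUpdate();
				}
			}
		}
	}

}

And the registration component that tells Liferay this step covers the 1.0.0 -> 2.0.0 hop:

import com.liferay.portal.upgrade.registry.UpgradeStepRegistrator;

import org.osgi.service.component.annotations.Component;

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class FooUpgradeStepRegistrator implements UpgradeStepRegistrator {

	@Override
	public void register(Registry registry) {
		registry.register(
			"com.example.foo.service", "1.0.0", "2.0.0", new UpgradeFooXml());
	}

}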

Through the lifespan of your plugin, you continue to add in UpgradeProcess implementations to handle the automatic update for dot releases and major releases. The best part is that you don't have to care what version everyone is using, Liferay will apply the right upgrade processes to take the users from what version they're currently at all the way through to the latest version.

This is all good, of course, but ServiceBuilder, well it behaves a little differently.

ServiceBuilder service.xml Development

As you go through development and you change the entities in service.xml and rebuild services, ServiceBuilder will update the SQL files used to create the tables, indices, etc. When you deploy the service the first time, ServiceBuilder will happily identify the initial deployment and will use the SQL files to create the entities.

This is where things can go sideways... If I deploy version 1.0.0 of the service and version 2.0.0 comes out, the service developer needs to implement an UpgradeProcess that makes the necessary changes to the tables to get things ready for the current version of the service. If you did not deploy version 1.0.0 but are starting out on 2.0.0, you don't want to have to execute all of the upgrade processes individually, you want ServiceBuilder to do what it has always done and just use the SQL files to create the version 2.0.0 of the entities.

So how do you support both of these scenarios correctly?

By using the Liferay-Require-SchemaVersion header¹ in your bnd.bnd file, that's how.

Supporting Both ServiceBuilder Upgrade Scenarios

The Liferay-Require-SchemaVersion header defines the current DB schema version number for your service modules. This version number should be incremented as you change your service.xml in preparation for a release.
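In bnd.bnd that's a single line; for the 2.0.0 release used in the example below, it would read:

Liferay-Require-SchemaVersion: 2.0.0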

There's code in the ServiceBuilder deployment which injects a hidden UpgradeProcess implementation that is defined to cover the "0.0.0" version (the version which represents the "new deployment") to the Liferay-Require-SchemaVersion version number.  So your first release will have the header set to 1.0.0, next release might be 2.0.0, etc.

So in our previous example with the 2.0.0 service release, when you deploy the service Liferay will match the "0.0.0" to "2.0.0" hidden upgrade process implementation provided by ServiceBuilder and will invoke it to get the 2.0.0 version of the tables, indices, etc. created for you using the SQL files.

The service developer must also code and register the manual UpgradeProcess instances that support the incremental upgrade. So for the example, there would need to be a 1.0.0 -> 2.0.0 UpgradeProcess implementation so when I deploy 2.0.0 to replace my 1.0.0 deployment, the UpgradeProcess will be used to modify my DB schema to get it up to version 2.0.0.

Conclusion

As long as you properly manage both the Liferay-Require-SchemaVersion header in the bnd.bnd file and provide your corresponding UpgradeProcess implementations, you will be able to easily handle the first time deployment as well as the upgrade deployments.

An important side effect to note here: you must manage your Liferay-Require-SchemaVersion correctly. If you set it initially to 1.0.0 and forget to update it on future releases, your users will have all kinds of issues. For initial deployments, the SQL scripts would create the entities using the latest SQL files, and then Liferay would try to apply UpgradeProcess implementations on top of them, making modifications they really don't need. For upgrade deployments, Liferay may not process upgrades because it believes the schema is already at the appropriate version.

¹ If the Liferay-Require-SchemaVersion header is missing, the value for the Bundle-Version will be used instead.

David H Nebinger 2017-05-18T03:37:48Z
Categories: CMS, ECM

Do Banks Speak the Millennial Language?

Liferay - Tue, 05/16/2017 - 03:56

No one doubts anymore that those born between 1980 and the mid-90s are a distinctive generation that has revolutionized what technology offers us today. Millennials represent 20% of the Spanish population and are the largest segment of smartphone owners, and meeting their expectations today poses a particular challenge for large banking institutions. The millennial challenge has undoubtedly set the pace of the digital transformation of our society and has pushed forward concepts like omnichannel, personalization, and user experience that traditional banking has not been able to escape. And with hardly any time to finish that transformation process, a new type of consumer is entering the market: Generation Z. Those born after 1995 are the purest generation of digital natives and they outnumber millennials, already making up 25% of the world's population. They have grown up empowered by unlimited access to information and have an innate ability to influence other people through online media.

Until recently, millennials were teenagers or young students operating in an extremely technological, connected, and mobile environment. But they have grown up, and they are now an immense population of consumers and users, with concrete demands, behaviors, and expectations that brands and companies cannot afford to ignore. Likewise, the first members of Generation Z are beginning to enter the workforce and will have a broad impact on society and business.

Traditional banking has lived through this whole process of change under pressure to transform itself, to reinvent its services and products, and to offer technological tools and channels it could never have imagined before. According to a recent Goldman Sachs study, 33% of millennials believe that in the near future they will no longer need banks, and according to Good Rebels, 66% of young Gen Zers believe that banks do not care about their generation because of their low income levels. What do both of today's generations look for and fail to find in their relationship with financial institutions? Is Spanish banking far from or close to satisfying this new generation of consumers? In any case, this is an opportunity for the financial sector to keep transforming itself and change those percentages in the future.

Understanding millennials and young Gen Zers

These generations demand a different kind of communication. They want institutions to be the ones who seek them out, and they want access from any device for any type of transaction, from checking account activity to making payments. As Brett King, author of Breaking Banks, puts it, 80% of young people will never set foot in a bank branch, and their entire relationship with banks will happen through their mobile phones.

Access channels matter more and more, and the priorities of today's customers are no longer those of yesterday's. The number of mobile banking users in the world has grown steadily since 2008, and the trend suggests this growth will continue at least through 2019. Millennials are the largest segment of smartphone owners, and more than half of them expect total consistency across the different channels. How will banks connect with all of these current and potential customers if not through a consistent, seamless omnichannel model? Are they, and will they be, able to innovate and develop applications, regardless of channel, that foster engagement with these generations?

Because all customers, not just the young ones, want their experiences with banks to be consistent, and they want the power to do whatever they want, whenever they want. They expect aggregated access to their accounts and want to move seamlessly between channels, so it is imperative to understand who they are and what their “journey” looks like. Questions worth asking include: Does the customer combine web, mobile, and branch to access their accounts and make decisions? Which channel do they use first? Which channel converts the most? How often do they access each channel?

Omnichannel banking

While the move to omnichannel is not simple for a bank, it presents many opportunities and provides a great competitive advantage, because it makes it possible to analyze information from each channel, fine-tune the customer experience, and offer content based on purchase data. The bank can thus build a detailed and accurate picture of the customer's preferences, habits, and behavior.

Banks need to invest in technology, resources, and employee training and reskilling in order to serve new consumers the way they expect. The institution will operate differently and will need to eliminate silos at the operational and organizational levels. Continuous education of employees throughout the organization is also necessary to move beyond product-centric models and orient toward young users and their needs.

Banks today are increasingly aware that if they can offer a seamless experience for accessing and managing their customers' personal finances, they will increase retention rates and also accelerate the acquisition of new customers. There is no point investing millions of euros in advertising to attract new users if the products and services offered to them are not properly optimized and adapted, if there is no innovation, and if there is no capacity to develop the applications needed to make those products and services accessible.

The reward for making this change is high: being able to reach all customers, including the demanding millennial and Z generations, anytime, anywhere, with personalization on every front. Half of young people today believe that tech startups will overtake banks. What traditional banks must do is change that perception through constant improvement of the user experience, speaking their language, and listening to them in order to launch new products and services adapted to their needs. That's easy to say; the challenge now is doing it.

Download the whitepaper on omnichannel banking >

Maria Sanchez 2017-05-16T08:56:47Z
Categories: CMS, ECM

Liferay DXP lifecycle events: chaos vs. order

Liferay - Sun, 05/14/2017 - 07:12

Historically there have been a number of extension points in Liferay that enable developers to hook into portal events and add their own custom/additional behaviour. In Liferay DXP this is still the case and the list below shows when certain events are fired and in which order. You’ll notice that a number of events are mentioned multiple times because they’re shown in the context of a specific action instead of just as a straightforward list of all the available event types:

  • Once during a server start/stop cycle:
    • global.startup.events
    • application.startup.events (1 for every portal instance ID)
    • application.shutdown.events (1 for every portal instance ID)
    • global.shutdown.events
  • For every login:
    • servlet.service.events.pre
    • servlet.session.create.events
    • servlet.service.events.post
    • login.events.pre
    • login.events.post
  • For every logout:
    • servlet.service.events.pre
    • logout.events.pre
    • servlet.session.destroy.events
    • logout.events.post
    • servlet.service.events.post
  • For every HTTP request:
    • servlet.service.events.pre
    • servlet.service.events.post
  • For every page that is updated:
    • servlet.service.events.pre
    • layout.configuration.action.update
    • servlet.service.events.post
  • For every page that is deleted:
    • servlet.service.events.pre
    • layout.configuration.action.delete
    • servlet.service.events.post

Even with this many extension points, there are a lot of cases where you'd want to add more than one custom behaviour to an event. Because developers like to create small, dedicated modules instead of putting everything together in one big module, it is also important that these custom event extensions, each in their own class/module, can run in a specific order. Otherwise the overall behaviour would be pretty random and erratic.

In Liferay 6.2 this was pretty simple: you had a number of properties in your portal-ext.properties, each containing a comma-separated list of implementation classes that would be run in the order they appeared in the list, e.g.:

login.events.post=com.liferay.portal.events.ChannelLoginPostAction,com.liferay.portal.events.DefaultLandingPageAction,com.liferay.portal.events.LoginPostAction

This was a very simple and easy-to-understand way of doing things. When I tried to do the same in Liferay DXP, it quickly became clear that with the switch to OSGi it no longer works this way. A single custom event handler is now a standalone OSGi component, e.g.: https://github.com/liferay/liferay-blade-samples/tree/master/maven/blade.lifecycle.loginpreaction, and I couldn't find any place where you could still define a list like I did before. So from the example it wasn't exactly clear how the order of the components could be influenced.

Initial testing showed that the order seemed to be determined by when the bundle was started. I tested this by making 2 small bundles that each contained a dummy implementation of the same lifecycle event, but each outputting a different message. After installing and starting both bundles, I got the messages I expected in the order the bundles were started. Then I stopped and restarted the bundle that had been started first, triggered the event again, and the order of the messages was the other way around.

Because I really don’t want this kind of behaviour, but something more deterministic, I was hoping that there was some sort of solution for this. This was when I remembered something about service ranking. In order to override a default Liferay service/component you just need to implement the correct interface and provide your implementation (component) with a service.ranking value thats larger that the one present on the original Liferay implementation. This same property can also be seen in the the Blade sample that shows how to add/override JSPs:

...

@Component(
	immediate = true,
	property = {
		"context.id=BladeCustomJspBag",
		"context.name=Test Custom JSP Bag",
		"service.ranking:Integer=100"
	}
)
public class BladeCustomJspBag implements CustomJspBag {
	...
}

The documentation for the cases above always talks about using the service.ranking property to find a single implementation of something: the highest-ranked implementation wins. So now I was wondering whether all these implementations are just kept in a sorted list somewhere. If that were the case, you'd expect Liferay to retrieve all the deployed implementations for a certain lifecycle event and use this sorting to run them in the correct order. So I quickly created a bunch of custom event implementations for the same event, gave them different service.ranking values, created a bundle, and installed and started it. To my surprise, I got the behaviour I wanted! The higher the service.ranking value on my custom event implementation, the earlier in the chain it was executed.
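For illustration, one of my test components looked roughly like this (the class name and message are mine; the key property follows the Blade lifecycle sample linked above):

import com.liferay.portal.kernel.events.ActionException;
import com.liferay.portal.kernel.events.LifecycleAction;
import com.liferay.portal.kernel.events.LifecycleEvent;

import org.osgi.service.component.annotations.Component;

@Component(
	immediate = true,
	property = {
		"key=login.events.post",

		// The higher the ranking, the earlier this action runs.

		"service.ranking:Integer=100"
	},
	service = LifecycleAction.class
)
public class RankedLoginPostAction implements LifecycleAction {

	@Override
	public void processLifecycleEvent(LifecycleEvent lifecycleEvent)
		throws ActionException {

		System.out.println("login.events.post action with ranking 100");
	}

}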

A quick debug and code inspection session did indeed show that EventsProcessorUtil uses ServiceTrackers to retrieve all the registered implementations for a lifecycle event and gets back a sorted list. This list is retrieved from a ListServiceTrackerBucket that internally keeps a sorted list of ServiceReferences. The ServiceReference implementation is Comparable, and the comparison is based on the service ranking value and, when rankings are identical, on the service ID.

The example code can be found on my Github: https://github.com/planetsizebrain/event-order-hook

Jan Eerdekens 2017-05-14T12:12:25Z
Categories: CMS, ECM

WannaCry ransomware is on the rise

Liferay - Sat, 05/13/2017 - 12:33

This cyber-attack has already been called the biggest in history. It targeted more than 75 countries and infected more than 250,000 computers. WannaCry ransomware has no mercy: hospitals, railways, government agencies, and home users are all under attack.

In Russia, the attack was the most massive. The messages coming in now resemble reports from a computer front line. Attempts were made to break into the Central Bank, the Ministry of Internal Affairs, and telco companies.

WannaCry has a clean interface, and the text in its ransom notes is translated into 26 languages. You only have three days to pay to get a decryptor. The malicious program, which encrypts your files, demands from 300 to 600 USD paid in Bitcoin.

There were no vaccines for the computer virus in any of the forty British clinics that were attacked first, or in the largest Spanish telecommunications company, Telefonica. In one of the seven monitoring centers of Deutsche Bahn, the German railway carrier, the traffic control system also stopped working. The consequences could have been catastrophic.

"This is all done by organized crime merely to earn some money. There is no political motivation or hidden motive; it's plain blackmail," says antivirus expert Ben Rapp.

WannaCry infects a computer when the user opens a suspicious email. Most infections happen when Windows is not updated regularly. This is clearly seen in how seriously WannaCry affected China: the inhabitants of this country have a special love for pirated copies of Windows.

If a company does not have a backup, it may lose access to its data. For example, if a hospital's patient database is kept on a server the virus managed to reach, the hospital will not be able to restore that data. All backups should be kept on separate offline storage.

So far, as bloggers have found out, the hackers have no more than 5,000 USD in their Bitcoin wallet. Given the long list of victims and the value of the data on their hard drives, it is clear that the amount the hackers have received so far is disproportionately small.

The British edition of the Financial Times reports that WannaCry ransomware is nothing more than a malicious program stolen by the attackers from the US National Security Agency. It was originally created to penetrate specific US computer networks. The same was confirmed by former NSA employee Edward Snowden.

From Snowden's Twitter: "Wow, the NSA's decision to create tools to attack US software now threatens the lives of hospital patients."

WikiLeaks, too, has repeatedly warned that because of their maniacal desire to monitor the whole world, US intelligence services are spreading malicious programs. But even if that were not the case, the question still arises as to how NSA software was actually stolen by hackers.

Nevertheless, the true scale of this attack has yet to be assessed. WannaCry continues to propagate all around the world. It is important not to open suspicious attachments, to back up your data, and to update your software. In case you are already infected, try this guide. Infosec experts warn that there will be more ransomware waves; the frequency and scale of such cyber-attacks will only grow.

Madeline Dickson 2017-05-13T17:33:10Z
Categories: CMS, ECM

Tutorial: Using Promise Objects in a SOY Portlet to Access Liferay Web Services

Liferay - Fri, 05/12/2017 - 23:18
In this tutorial, I am going to talk about how to utilize Promise objects to access web services in Liferay DXP.

Before we get into more detail, please note that the Liferay version I am using is Liferay DXP DE-15, since there is a big SOY development experience improvement in this patch.

Previously we have talked about how to create a SOY portlet, how to use a 3rd party JS lib (ChartJS) in a SOY portlet, and how to use Liferay Service Builder to create a remote service (web service) in Liferay. In this tutorial, we are going to put all this knowledge together to make a portlet that can visualize data from a web service.

We have talked about making Service Builder remote services. Liferay DXP serves these services as a service provider, and now we are using Liferay as a service consumer to call them. These services are not limited to the Liferay platform; they can be served by any system, so this is also a way of integrating with a 3rd party system. The 3rd party system is good at its own job in its business domain; Liferay is good at providing a connected and consistent user experience regardless of where the data comes from and how users choose to engage with the business.

Matching real-world requirements, the approach in this tutorial can satisfy needs like:
  • A user dashboard whose data comes from 3rd party systems
  • Responsive design (Liferay's Bootstrap)
  • Plug and play (microservices)
  • Abundant technology options (NPM)
Let's Go!

Step 1 - Use the knowledge you already have to create a ChartJS portlet

As we have done in the previous articles, let's create a new ChartJS SOY portlet project called monthly-trading-web. But this time, we can modify package.json a little bit. Previously the metal-cli npm plugin had a CRLF compile issue. This issue has been resolved in the latest release of metal-cli, so the good news is you don't need to replace the CRLF with LF anymore. You just need to make sure your metal-cli version is 4.0.1.

{
	"dependencies": {
		"metal-component": "^2.10.0",
		"metal-soy": "^2.10.0",
		"chart.js": "^2.4.0"
	},
	"devDependencies": {
		"liferay-module-config-generator": "^1.2.1",
		"metal-cli": "^4.0.1"
	},
	"name": "monthly-trading",
	"version": "1.0.0"
}

Thanks to my friend Chema Balsas and the Liferay UI team for the effort again!

Step 2 - Find your service and mock up data

As we have done in the previous tutorial, we can add some mockup data through /api/jsonws. The service context is Banking. The method name is add-monthly-trading.

After adding mockup data, you can click on get-monthly-trading-by-year and then input the year. Click the Invoke button, and then click on the URL Example tab to get the service URL.

Take note of the p_auth. This is an auth token for authentication.

Step 3 - Pass the service URL to JS

In the render method of the portlet class, we can put the URL into the template so that the SOY template and es.js can receive the variable.

public void render(
		RenderRequest renderRequest, RenderResponse renderResponse)
	throws IOException, PortletException {

	String tradingYear = "2017";
	String pauth = AuthTokenUtil.getToken(
		PortalUtil.getHttpServletRequest(renderRequest));
	String portletNamespace = renderResponse.getNamespace();

	template.put(
		"remoteURL",
		"/api/jsonws/banking.monthlytrading/get-monthly-trading-by-year/year/" +
			tradingYear + "?p_auth=" + pauth);
	template.put("tradingYear", tradingYear);
	template.put("portletNamespace", portletNamespace);

	super.render(renderRequest, renderResponse);
}

In es.js, we can receive the variable in the constructor method.

constructor(opt_config) {
	super(opt_config);

	let remoteURL = opt_config.remoteURL;
	let tradingYear = opt_config.tradingYear;

	this.portletNamespace = opt_config.portletNamespace;

	this.createRemoteChart_(remoteURL, tradingYear); // Hasn't been defined yet.
}

Step 4 - Make a promise

What is a Promise? "A Promise is a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers with an asynchronous action's eventual success value or failure reason. This lets asynchronous methods return values like synchronous methods: instead of immediately returning the final value, the asynchronous method returns a promise to supply the value at some point in the future." --MDN

In a SOY portlet we can utilize ES6 to use the Promise object. In our es.js file, this is how we define and return a promise object. First you need to import the Promise object:

import { CancellablePromise } from 'metal-promise/src/promise/Promise';

This is using metaljs' cancellable promise.
Then you can define a method that builds and returns the promise object:

/**
 * Get remote trading data
 * @protected
 * @param {String} remoteURL
 * @return {CancellablePromise} A promise that resolves with the service data
 */
getChartData_(remoteURL) {
	let promise = new CancellablePromise((resolve, reject) => {
		let requestConfig = {
			contentType: false,
			dataType: "json",
			processData: false,
			type: "GET",
			url: remoteURL
		};

		AUI.$.ajax(requestConfig)
			.done((data) => resolve(data))
			.fail((jqXHR, status, error) => reject(error));
	});

	return promise;
}

Take note that we are using the out-of-the-box jQuery in Liferay. In Liferay we have sandboxed jQuery (v2.1.4) into the AUI object; when you need to call a jQuery method, just use AUI.$.... You are also free to use your preferred jQuery without naming conflicts.

Step 5 - Fulfill your promise

Next we write the UI logic based on the data from the web service.

/**
 * Create Chart with data url
 *
 * @param {String} remoteURL
 * @protected
 */
createRemoteChart_(remoteURL, tradingYear) {
	this.getChartData_(remoteURL).then(data => {
		let chartcanvas = document.getElementById(
			this.portletNamespace + "monthly-trading-chart");

		let labels = Array.from(data, d => d.month);
		let bgColor = this.getPreferedColors_(data.length, 0.3);
		let borderColor = this.getPreferedColors_(data.length, 0.8);
		let dataValue = Array.from(data, d => d.volume);

		let chartData = {
			labels: labels,
			datasets: [
				{
					label: "Monthly Trade of " + tradingYear,
					backgroundColor: bgColor,
					borderColor: borderColor,
					borderWidth: 1,
					data: dataValue
				}
			]
		};

		let options = {
			scales: {
				xAxes: [{ stacked: true }],
				yAxes: [{ stacked: true }]
			}
		};

		let myBarChart = new Chart(chartcanvas, {
			type: 'bar',
			data: chartData,
			options: options
		});
	});
}

I have another method that lets the bar chart use only my preferred colors.

/**
 * Get bar background colors from preferred colors
 * @protected
 * @param {int} length
 * @param {string} opacity
 * @return {Array} a color array
 */
getPreferedColors_(length, opacity = 1) {
	let colorsRepo = [
		"255, 99, 132",
		"54, 162, 235",
		"255, 206, 86",
		"75, 192, 192",
		"153, 102, 255",
		"255, 159, 64"
	];

	let colors = new Array();

	for (let i = 0; i < length; i++) {
		let index = i % colorsRepo.length;
		let color = "rgba(" + colorsRepo[index] + "," + opacity + ")";

		colors.push(color);
	}

	return colors;
}

Hope you enjoy it.

Runnable source code is in my GitHub repo: https://github.com/neilking/liferay-samples/tree/master/sample-workspace/sample-liferay-workspace/modules/monthly-trading-web

Neil Jin 2017-05-13T04:18:04Z
Categories: CMS, ECM

Liferay IDE 3.1 Milestone 3 Released

Liferay - Mon, 05/08/2017 - 20:14

Hello all,

 

We are pleased to announce that we have pushed a new release of Liferay IDE 3.1 to the milestones update site. You can install the new release from here as usual:

 

http://releases.liferay.com/tools/ide/latest/milestone

 

For the full list of bundles, which includes an Eclipse Neon 3 JavaEE package with Liferay IDE 3.1 M3 pre-installed, see:

 

https://www.liferay.com/downloads/liferay-projects/liferay-ide

This is the 3rd milestone release. (See this blog entry for highlights of the 3.1 M2 release.)

Some release highlights include:

  • New Liferay JSF Project

    • Gradle

    • Maven

  • Liferay Maven Support

    • Liferay Maven Workspace Support

    • Liferay Maven Module Project Fragment

  • Improved Code Upgrade Tool

  • Miscellaneous bug fixes

 

New Liferay JSF Project

We now support developing JSF projects for Portal 7 through the New Liferay JSF Project wizard, which recognizes both Gradle and Maven based WAR projects and can deploy them to Liferay 7.

 

Liferay Maven Support

We now have Maven support in the New Liferay Workspace project wizard and the new Liferay Module Project Fragment wizard.

 

 

Next

We are going to release IDE 3.1 Beta 1 very soon, and the Liferay Workspace Installer and Liferay Developer Studio will be available then.

 

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!

Yanan Yuan 2017-05-09T01:14:46Z
Categories: CMS, ECM

Increasing Capacity and Decreasing Response Times Using a Tool You're Probably Not Familiar With

Liferay - Sun, 05/07/2017 - 23:29
Introduction

When it comes to Liferay performance tuning, there is one golden rule:

The more you offload from the application server, the better your performance will be.

This applies to all aspects of Liferay. Using Solr/Elastic is always better than using the embedded Lucene. While PDFBox works, you get better performance by offloading that work to ImageMagick and GhostScript.

You can get even better results by offloading work before it gets to the application server. What I'm talking about here is caching, and one tool I like to recommend for this is Varnish.

According to the Varnish site:

Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.

So I've found the last claim to be a little extreme, but I can say for certain that it can offer significant performance improvement.

Basically, Varnish is a caching appliance. When an incoming request hits Varnish, it will look in its cache to see if the response has been rendered before. If it isn't in the cache, Varnish will pass the request to the back end and store the response (if possible) in the cache before returning it to the original requestor. As additional matching requests come in, Varnish will be able to serve the response from the cache instead of sending it to the back end for processing.

So there are two requirements that need to be met to get value out of the tool:

  1. The responses have to be cacheable.
  2. The responses must take time for the backend to generate.

As it turns out for Liferay, both of these are true.

So Liferay can actually benefit from Varnish, but we can't just make such a claim, we'll need to back it up w/ some testing.

The Setup

To complete the test I set up an Ubuntu VirtualBox instance w/ 12G of memory and 4 processors, and I pulled in a Liferay DXP FP 15 bundle (no performance tuning for JVM params, etc). I also compiled Varnish 4.1.6 on the system. For both tests, Tomcat will be running using 8G and Varnish will also be running w/ an allocation of 2G (even though varnish is not used for the Tomcat test, I think it is "fairer" to keep the tests as similar as possible).

In the DXP environment I'm using the embedded ElasticSearch and HSQL for the database (not a prod configuration but both tests will have the same bad baseline). I deployed the free Porygon theme from the Liferay Marketplace and set up a site based on the theme. The home page for the Porygon demo site has a lot of graphics and stuff on it, so it's a really good site to look at from a general perspective.

The idea here was not to focus too much on Liferay tuning, but to get a site up that was serving a bunch of mixed content. Then we measure a non-Varnish configuration against a Varnish configuration to see what impact Varnish can have in performance terms.

We're going to test the configuration using JMeter and we're going to hit the main page of the Porygon demo site.

Testing And Results

JMeter was configured to use 100 users and loop 20 times. Each test would touch on the home page, the photography, science and review pages, and would also visit 3 article pages. JMeter was configured to retrieve all related assets synchronously to exaggerate the response time from the services.

Response Times

Let's dive right in with the response times for the test from the non-Varnish configuration:

The runtime for this test was 21 minutes, 20 seconds. The 3 article pages are the lines near the bottom of the graph, the lines in the middle are for the general pages w/ the asset publishers and all of the extra details.

Next graph is the response times from the Varnish configuration:

The runtime for this test was 11 minutes, 58 seconds, a 44% reduction in test time, and it's easy to see that while the non-Varnish tests seem to float around the 14 second mark, the Varnish tests come in around 6 seconds.

If we rework the graph to adjust the y-axis to remove the extra whitespace we see:

The important part here for me was the lines for the individual articles. In the non-Varnish test, /web/porygon-demo/-/space-the-final-frontier?inheritRedirect=true&redirect=%2Fweb%2Fporygon-demo shows up around the 1 second response time, but with Varnish it hovers at the 3 second response time. Keep that in mind when we discuss the custom VCL below.

Aggregate Response Times

Let's review the aggregate graphs from the tests.  First the non-Varnish graph:

This reflects what we've seen before; individual pages are served fairly quickly, pages w/ all of the mixed content take significantly longer to load.

And the graph for the Varnish tests:

At the same scale, it is easy to see that Varnish has greatly reduced the response times.  Adjusting the y-axis, we get the following:

Analysis

So there's a few parts that quickly jump out:

  • There was a 44% reduction in test runtime reflected by decreased response times.
  • There was a measurable (but unmeasured) reduction in server CPU load since Liferay/Tomcat did not have to serve all traffic.
  • Since work is offloaded from Liferay/Tomcat, overall capacity is increased.
  • While some response times were greatly improved by using Varnish, others suffered.

The first three bullets are easy to explain. Since Varnish is able to cache "static" responses from Liferay/Tomcat, it can serve those responses from the cache instead of forcing Liferay/Tomcat to build a fresh response every time. Rebuilding responses each time requires CPU cycles, so returning a cached response reduces the CPU load. And since Liferay/Tomcat is no longer busy rebuilding the responses that now come from the cache, it is free to handle the requests that cannot be cached; basically, the overall capacity of Liferay/Tomcat is increased.

So you might be asking: since Varnish is so great, why do the single article pages suffer a response time degradation? Well, that is due to the custom VCL script used to control the caching.

The Varnish VCL

So if you don't know about Varnish, you may not be aware that caching is controlled by a VCL (Varnish Configuration Language) file. This file is closer to a script than it is a configuration file.

Normally Varnish operates by checking the backend response cache control headers; if a response can be cached, it will be, and if the response cannot be cached it won't. The impact of Varnish is directly related to how many of the backend responses can be cached.

You don't have to rely solely on the cache control headers from the backend to determine cacheability; this is especially true for Liferay. Through the VCL, you can actually override the cache control headers to make some responses cacheable that otherwise would not have been, and make other responses uncacheable even when the backend says caching is acceptable.

So now I want to share the VCL script used for the test, but I'll break it up into parts to discuss the reasons for the choices that I made. The whole script file will be attached to the blog for you to download.

In the sections below comments have been removed to save space, but in the full file the comments are embedded to explain everything in detail.

Varnish Initialization

probe company_logo {
	.request =
		"GET /image/company_logo HTTP/1.1"
		"Host: 192.168.1.46:8080"
		"Connection: close";
	.timeout = 100ms;
	.interval = 5s;
	.window = 5;
	.threshold = 3;
}

backend LIFERAY {
	.host = "192.168.1.46";
	.port = "8080";
	.probe = company_logo;
}

sub vcl_init {
	new dir = directors.round_robin();
	dir.add_backend(LIFERAY);
}

So in Varnish you need to declare your backends to connect to.  In this example I've also defined a probe request used to verify health of the backend.  For probes it is recommended to use a simple request that results in a small response; you don't want to overload the system with all of the probe requests.

Varnish Request

sub vcl_recv {
	...

	if (req.url ~ "^/c/") {
		return (pass);
	}

	if (req.url ~ "/control_panel/manage") {
		return (pass);
	}

	...

	if (req.url !~ "\?") {
		return (pass);
	}

	...
}

The request handling basically determines whether to hash (look the request up in the cache) or pass (send the request directly to the backend without caching).

All requests that start with the "/c/..." URI are passed to the backend. They represent requests for /c/portal/login or /c/portal/logout and the like, so we never want to cache those regardless of what the backend might say.

Any control panel requests are also passed directly to the backend. We wouldn't want to accidentally expose any of our configuration details now, would we?

Otherwise, the code tries to force hashing of binary files (mp3, image, etc.) when possible and conforms to most average VCL implementations.

As for the last check, whether the URL contains a '?' character... I'll get to that later in the conclusion.

Varnish Response

sub vcl_backend_response {
	if (bereq.url ~ "^/c/") {
		return (deliver);
	}

	if (bereq.url ~ "\.(ico|css)(\?[a-z0-9=]+)?$") {
		set beresp.ttl = 1d;
	}
	else if (bereq.url ~ "^/documents/" && beresp.http.content-type ~ "image/*") {
		if (std.integer(beresp.http.Content-Length, 0) < 10485760) {
			if (beresp.status == 200) {
				set beresp.ttl = 1d;
				unset beresp.http.Cache-Control;
				unset beresp.http.set-cookie;
			}
		}
	}
	else if (beresp.http.content-type ~ "text/javascript|text/css") {
		if (std.integer(beresp.http.Content-Length, 0) < 10485760) {
			if (beresp.status == 200) {
				set beresp.ttl = 1d;
			}
		}
	}

	...
}

The response handling also passes the /c/ type URIs back to the client w/o caching.

The most interesting part of this section is the testing for content type and the altering of caching as a result. Normally, VCL rules look for a request like "/blah/blah/blah/my-javascript.js" by checking for the extension as part of the URI.

But Liferay really doesn't use these standard extensions. For example, with Liferay you'll see a lot of requests like /combo/?browserId=other&minifierType=&languageId=en_US&b=7010&t=1494083187246&/o/frontend-js-web/liferay/portlet_url.js&.... These kinds of requests do not have a standard extension, so normal VCL matching patterns would discard them as uncacheable. Using the VCL override logic above, the request will be treated as cacheable since it is just a request for some JS.

The same kind of logic applies to the /documents/ URI prefix; anything w/ this prefix is a fetch from the document library. Full URIs look like /documents/24848/0/content_16.jpg/027082f1-a880-4eb7-0938-c9fe99cefc1a?t=1474371003732. Again, since it doesn't end w/ a standard extension, the image might not be cached. The override rule above matches on the /documents/ prefix plus an image content type and treats the request as cacheable.

Conclusion

So let's start with the easy ones...

  • Adding Varnish can decrease your response times.
  • Adding Varnish can reduce your server load.
  • Adding Varnish can increase your overall capacity.

Honestly I was expecting that to be the whole list of conclusions I was going to have to worry about. I had this sweet VCL script and performance times were just awesome. As a final test, I tried logging into my site with Varnish in place and, well, FAIL.  I could log in, but I didn't get the top bar or access to the left or right sidebars or any of these things.

I realized that I was actually caching the response from the friendly URLs and, well, for Liferay those are typically dynamic pages.  There is logic in the theme template files that changes the content depending upon whether you are logged in or not.  Because my Varnish script was caching the pages when I was not logged in, after I logged in the page was coming from the cache and the stuff I needed was now gone.

I had to add the check for the "?" character in the requests to determine if it was a friendly URL or not.  If it was a friendly URL, I had to treat those as dynamic and had to send them to the backend for processing.

This leads to the poor performance on, for example, the single article display pages.  My first VCL was great, but it cached too much.  My addition for friendly URLs solved the login issue but now prevented caching pages that maybe could have been cached, so I swung too far the other way; but since the general results were still awesome I just went with what I had.

Now for the hard conclusions...

  • Adding Varnish requires you to know your portal.
  • Adding Varnish requires you to know your use cases.
  • Adding Varnish requires you to test all aspects of your portal.
  • Adding Varnish requires you to learn how to write VCL.

The VCL really isn't that hard to wrap your head around.  Once you get familiar with it, you'll be able to customize the rules to increase your cacheability factor without sacrificing the dynamic nature of your portal.  In the attached VCL, we add a response header for a cache HIT or MISS, and this is quite useful for reviewing the responses from Varnish to see if a particular response was cached or not (remember the first request will always be a MISS, so check again after a page refresh).
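
A sketch of what that HIT/MISS header logic typically looks like in vcl_deliver (the header name here is illustrative; check the attached VCL for the exact one used):

sub vcl_deliver {
    # obj.hits > 0 means the object was served from the cache
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}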

I can't emphasize the testing enough though.  You want to manually test all of your pages a couple of times, logged in and not logged in, logged in as users w/ different roles, etc., to make sure each UX is correct and that you're not bleeding views that should not be shared.

You should also do your load testing.  Make sure you're getting something out of Varnish and that it is worthwhile for your particular situation.

Note About SSL

Before I forget, it's important to know that Varnish doesn't really talk SSL, nor does it talk AJP.  If you're using SSL, you're going to want to have a web server sitting in front of Varnish to handle SSL termination.

And since Varnish doesn't talk AJP, you will have to configure HTTP connections from both the web server and the app server.

This points toward the reasoning behind my recent blog post about configuring Liferay to look at a header for the HTTP/HTTPS protocols.  In my environment I was terminating SSL at Apache and needed to use the HTTP connectors to Varnish and again to Tomcat/Liferay.

Although it was suggested in a few of the comments that separate connections could be used to facilitate the HTTP and HTTPS traffic, etc., those options would defeat some of the Varnish caching capabilities. You'd either have separate caches for each connection type (or perhaps no cache on one of them) or run into other unforeseen issues. Routing all traffic through a single pipe to Varnish ensures Varnish can cache the response regardless of the incoming protocol.

Update - 05/16/2017

Small tweak to the VCL script attached to the blog: I added rules to exclude all URLs under /api/* from being cached.  Those are basically your web service calls, and rarely would you really want to cache those details.  Find the file named localhost-2.vcl for the update.
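
The rule itself is a one-liner in vcl_recv; a sketch (the actual rule ships in localhost-2.vcl):

sub vcl_recv {
    ...
    # Never cache web service calls.
    if (req.url ~ "^/api/") {
        return (pass);
    }
    ...
}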

David H Nebinger 2017-05-08T04:29:03Z
Categories: CMS, ECM

Revisiting SSL Termination at Apache HTTPd

Liferay - Fri, 05/05/2017 - 17:37

So I have a blog I created a long time ago dealing w/ Liferay and SSL. The foundation of that post was my Fronting Liferay Tomcat with Apache HTTPd post; it added terminating SSL at HTTPd and configuring the Liferay instance running under Tomcat to use HTTPS for all of the communication.

If you tear into the second post, you'll find that I was using the AJP connector to join HTTPd and Tomcat together.

This is actually a key aspect for a working setup for SSL + HTTPd + Liferay/Tomcat.

Today I was actually working on a similar setup that used the HTTP connector for SSL + HTTPd + Liferay/Tomcat. Unauthenticated traffic worked just fine, but as soon as you would try to access a secured resource that required authentication, a redirect loop resulted with HTTPd finally terminating the loop.

The only info I had was the redirect URL, https://example.com/c/portal/login?null. There were no log messages in Liferay/Tomcat, just repeated 302 messages in the HTTPd logs.

My good friend and coworker Nathan Shaw told me of a similar case he was aware of that involved Nginx; although the web servers differ, the 302 redirect loop on /c/portal/login?null was an exact match.

The crux of the issue is the setting of the company.security.auth.requires.https property in portal-ext.properties.

Basically when you set this property to true, you are saying that when a user logs in, you want to force them into the secure https side. Seems pretty simple, right?

So in this configuration, when a user on http:// wants to or needs to log in, they basically end up hitting http://.../c/portal/login. This is where a check for HTTPS is done and, since the connection is not yet HTTPS, Liferay will issue a redirect back to https://.../c/portal/login to complete the login.

And this, in conjunction with the HTTP connector between HTTPd and Liferay/Tomcat, is what causes the redirect loop.

Liferay responds with the 302 to try and force you to https, you submit again but SSL terminates at HTTPd and the request is sent via the HTTP connector to Liferay/Tomcat.  Well, Liferay/Tomcat sees the request came in on http:// and again issues the 302 redirect. You're now in redirect loop hell.

Fortunately, this is absolutely fixable.

Liferay has a set of portal-ext.properties settings to mitigate the SSL issue. They are:

#
# Set this to true to use the property "web.server.forward.protocol.header"
# to get the protocol. The property "web.server.protocol" must not have been
# overriden.
#
web.server.forwarded.protocol.enabled=false

#
# Set the HTTP header to use to get the protocol. The property
# "web.server.forwarded.protocol.enabled" must be set to true.
#
web.server.forwarded.protocol.header=X-Forwarded-Proto

The important property is the first one.  When that property is true, Liferay will ignore the protocol (http vs https) of the incoming request and will instead use a request header to see what the original protocol for the request actually was.

The header name can be specified using the second property, but the default one works just fine. It's also the term to google when looking for an answer for your particular web server.
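
So in practice the only line you need in portal-ext.properties is the override of the first property (assuming you keep the default header name):

#
# Trust the X-Forwarded-Proto header set by the web server instead of
# the protocol of the incoming connector request.
#
web.server.forwarded.protocol.enabled=true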

I'll save you the trouble for Apache HTTPd; you just need to add a couple of lines to your <VirtualHost /> elements:

<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
    ...
</VirtualHost>

<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"
    ...
</VirtualHost>

That's it.

For every incoming request getting to HTTPd, a header is added with the request protocol.  When the ProxyPass configuration forwards the requests to Liferay/Tomcat, Liferay will use the header for the check on https:// rather than the actual connection from HTTPd.
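
Putting the pieces together, an HTTPS virtual host might look something like the following sketch; the ServerName and proxy target are placeholders, certificate directives are omitted, and mod_headers plus mod_proxy/mod_proxy_http are assumed to be enabled:

<VirtualHost *:443>
    ServerName www.example.com

    # SSL termination directives (certificates, etc.) go here...

    RequestHeader set X-Forwarded-Proto "https"

    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>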

Some of you are going to be asking:

Why are you using the HTTP connector to join HTTPd to Liferay/Tomcat anyway? The AJP connector is the best connector to use in this configuration because it performs better than the HTTP connector and avoids this and other issues that come with using the HTTP connector.

You would be, of course, absolutely right about that. For a simple configuration like this where you only have HTTPd <-> Liferay/Tomcat, using the HTTP connector is frowned upon.

That said, I've got another exciting blog post in the pipeline that will force moving to this configuration... I'm not getting into any details at this point, but suffice it to say that when you see the results that I've been gathering, you too will be looking at this configuration.

David H Nebinger 2017-05-05T22:37:36Z
Categories: CMS, ECM

The State of Digital Transformation in Financial Services

Liferay - Thu, 05/04/2017 - 17:46

Digital transformation is often referred to as a sort of road. The beginning is uphill and it’s hard to get started. Later, you see obstacles ahead blocking the way. When a finish line appears in the distance, a new road appears beyond that. But setting the metaphor aside, the point is that digital transformation is critical for the demands of customer experience. Your company must start traveling down the road, and the sooner you do, the better. With research from Liferay, WBR Digital created a report to explore the financial industry-wide transformation taking shape. Based on the findings, we’ve illustrated survey data, sampled from digital leaders in banking, to show what the landscape looks like, where the competition stands and how to jump the hurdles waiting ahead.

Getting Started

It’s no surprise that an overwhelming majority of digital leaders at banks recognize the intrinsic role technology plays in digital transformation. What’s alarming is that 56% of respondents cite access to IT resources as an obstacle. Thus, a cycle emerges. Stakeholders are aware of the significant investment that must be available for effective transformation, but it is difficult to get approval for an appropriate, multi-year budget. Business executives and IT leaders must work together to prioritize a digital transformation strategy that is owned across departments.

The Journey Ahead

Especially for large enterprises, updating technology takes time. According to banking leaders surveyed, 49% of companies consider their strategy halfway completed, 37% of financial institutions are beginning to roll out their strategy and only 13% of respondents report completing (or being close to completing) their digital transformation strategy. While these mile-markers are somewhat subjective, companies past the imaginary midpoint have likely created a strategy and are integrating existing with new technology. Banks are building their digital strategy with digital experience platforms that can address immediate problems and prepare for scalable growth.

See also: What is digital transformation? 

Obstacles en Route

The challenges facing banks along this journey are largely operational. This includes lacking any central repository for customer information, or a problematic volume of unstructured data (and the inability to contextualize customers and their needs, because of disorganized data). The good news is that understanding these barriers makes it easier to overcome them. While no specific obstacle takes the lion’s share of the challenge, some impact revenue more than others. In order of agreement, these are the top three challenges for banks seeking an omnichannel customer experience: availability of IT resources, regulatory compliance, fragmented data resources. It’s interesting that almost the same percentage of financial businesses who consider their strategy halfway completed are also taking the time to create continuity across all customer touchpoints. This will ensure success in the immediate and long-term, as customers prefer companies with interconnected digital experiences.

Are We There Yet?

Transformation requires vision. And vision requires strategy. To build momentum and execute such a strategy, respondents identified that banks must build a system for better collaboration. Creating a cross-departmental digital team and promoting a flexible culture are essential when preparing for operational change. It’s no longer viable to rely on marketing for everything related to customers. In agile, the customer comes first, no matter what team you’re on.

For more information on digital transformation facing financial institutions today, read the report.

 

William Jameson 2017-05-04T22:46:28Z
Categories: CMS, ECM

How to Create Customer Surveys That Actually Tell You Something

Liferay - Thu, 05/04/2017 - 11:00

Customer surveys are a tried and true method of peeking inside the minds of both existing and potential clients so that companies in turn can better serve customers. While these question and answer forms may have their roots in printed-out sheets and face-to-face interactions, the customer survey has remained viable and even more accessible in the age of digital business.

Thanks to online platforms, customer surveys can be distributed to a larger audience than ever. As such, businesses have been given the opportunity to receive large amounts of detailed feedback that can help to better shape their business and digital marketing strategies. By receiving honest, insightful answers from customers through digital surveys, companies can receive crucial takeaways, including a better understanding of client behavior, pain points and customer journeys.

However, great opportunities require careful planning. The best questionnaires lead to eye-opening survey data and improved services that may not have been possible without the feedback of target audiences.

If you are searching for ways to improve your surveys so that you are not only making the most of your opportunity but also better serving your client base, consider the following five customer survey guidelines. You may find that even the smallest improvement can lead to more useful answers and improved long-term results.

5 Guidelines to Improve Your Customer Surveys

Surveys take time, effort and investment to get them to audiences. But when their questions are flawed and their deployment misses the mark, the results could be disheartening and even lead to incorrect takeaways that affect your future strategies. The following five guidelines can be implemented in the creation and release of future questionnaires to improve survey experience, receive honest feedback and prevent the pitfalls frequently experienced in survey creation, including letting your own bias influence results and failing to target the right audience for your specific needs.

1. Define and Address Your Audience – Defining your audience is the first step that every successful survey needs to take to make the most of the opportunity at hand. While questions posed by your team may have sparked the need for a survey, determining who exactly is answering those questions will be the only way to receive usable survey data. According to Vertical Response, segmenting the audience may be right for your specific survey goals. Even if you have a large database of contacts, cutting them down to a group based on product purchases, geographic location, industry or another definable characteristic may be the most accurate way to get the specific feedback you want.

2. Remove Leading Questions for More Honest Answers - It’s easy to let your bias and hopes result in phrasing questions that lead survey takers toward specific answers. As detailed by Help Scout, the best surveys lead to honest feedback and perspectives that you may have been unaware of prior to the survey. After you have completed your questions, attempt to look at them from a more objective point of view. Do your questions contain words that ascribe some sort of value to the subject? For example, “How have you enjoyed our new and improved online features?” pushes toward only positive responses. Instead ask, “What is your opinion of our recent update to online features?” This remains far more neutral in tone. Stay away from assumptions and pre-supposed facts and keep your questions as straightforward as possible, while still encouraging smart, open-ended answers.

3. Align Questions with Your Survey Goals – Every survey should be created with the intent to reach several goals. Typically, these goals concern creating survey data used to improve services, evolve digital marketing efforts and understand opinions on your brand. Useful and successful surveys constantly drive toward this goal and every question should be in service of fulfilling this. As discussed by Help Scout, goals should be definable in two to three sentences to keep your questions on point and consistently directed toward what you are interested in discovering. An added benefit is that your survey can be made much shorter while still remaining effective. Research reported on by Client Heartbeat has shown that shorter surveys, typically under 10 questions, have a much lower abandonment rate.

4. Move from Simpler to More Complex Questions - A survey should be structured in a way that guides readers deeper and toward more complex questions. This is accomplished by arranging questions in a way that encourages more robust, informative answers after first grabbing them with simpler questions. Think of your question layout as warming up the survey taker, which accustoms them to your questionnaire and encourages them to become more invested in completing all the questions. Putting complex questions too early within the survey could cause them to drop out or rush through their answers, leading to poor, unusable results. According to research from Bain Insights, the lower the response rate, the more random the survey data. So making your survey more likely to be completed is crucial in trusting your results.

5. Use Tools That Enable Greater Customization and Analysis – There are many potential tools that can be used when creating an online survey. But which one is right for you? Major advances in form creation in recent years include collecting data on dropout rates for each question so that these problematic sections can be eliminated. Also, advanced survey forms can be customized to dynamically change questions based on how a person identifies him/herself earlier in the survey, such as being a first-time loan applicant versus a renewal. While making sure that your questionnaire is easy to read and complete is essential, it should also be customizable for your unique needs. A successful customer survey will be flexible enough to accommodate both simple, multiple choice questions and more open-ended, text-based answers to encourage useful, detailed information from audiences. In doing so, businesses can completely incorporate the previous four guidelines into their new survey.

Make the Most of Your Customer Surveys

Customer surveys are just one way in which businesses can better understand and serve their client base. Learn more about how to strategically improve your customer experiences and meet the needs of target audiences to stay ahead of today’s trends.

Read Four Strategies to Transform Your Customer Experience

Matthew Draper 2017-05-04T16:00:45Z
Categories: CMS, ECM

Tomcat+HikariCP

Liferay - Wed, 05/03/2017 - 10:08

In case you aren't aware, Liferay 7 CE and Liferay DXP default to using Hikari CP for the connection pools.

Why?  Well here's a pretty good reason:

Hikari just kicks the pants off of any other connection pool implementation.

So Liferay is using Hikari CP, and you should too.

I know what you're thinking.  It's something along the lines of:

But Dave, we're following the best practice of defining our DB connections as Tomcat <Resource /> JNDI definitions so we don't expose our database connection details (URLs, usernames or passwords) to the web applications.  So we're stuck with the crappy Tomcat connection pool implementation.

You might be thinking that, but if you are, thankfully you'd be wrong.

Installing the Hikari CP Library for Tomcat

So this is pretty easy, but you have two basic options.

First is to download the .zip or .tar.gz file from http://brettwooldridge.github.io/HikariCP/.  This is actually a source release that you'll need to build yourself.

Second option is to download the built jar from a source like Maven Central, http://central.maven.org/maven2/com/zaxxer/HikariCP/2.6.1/HikariCP-2.6.1.jar.

Once you have the jar, copy it to the Tomcat lib/ext directory.  Note that Hikari CP has a dependency on SLF4J, so you'll need to put that jar into lib/ext too.

Configuring the Tomcat <Resource /> Definitions

The location of your JNDI datasource <Resource /> definitions depends upon the scope of the connections.  You can define them globally by specifying them in Tomcat's conf/server.xml and conf/context.xml, or you can scope them to individual applications by defining them in conf/Catalina/localhost/WebAppContext.xml (where WebAppContext is the web application context for the app, basically the directory name from Tomcat's webapps directory).

For Liferay 7 CE and Liferay DXP, all of your plugins belong to Liferay, so it is usually recommended to put your definitions in conf/Catalina/localhost/ROOT.xml.  The only reason to make the connections global is if you have other web applications deployed to the same Tomcat container that will be using the same database connections.

So let's define a JNDI datasource in ROOT.xml for a Postgres database...

Create the file conf/Catalina/localhost/ROOT.xml if it doesn't already exist.  If you're using a Liferay bundle, you will already have this file.

Hikari CP supports two different ways to define your actual database connections.  The first way, the one they prefer, is based upon using a DataSource instance (the more standard way of establishing a connection with credentials); the older way uses a DriverManager instance (a legacy approach with different ways of passing credentials to the DB driver).

We'll follow their advice and use the DataSource.  Use the table from https://github.com/brettwooldridge/HikariCP#popular-datasource-class-names to find your data source class name; we'll need it when we define the <Resource /> element.

Gather up your JDBC url, username and password because we'll need those too.

Okay, so in ROOT.xml inside of the <Context /> tag, we're going to add our Liferay JNDI data source connection resource:

<Resource name="jdbc/LiferayPool" auth="Container" factory="com.zaxxer.hikari.HikariJNDIFactory" type="javax.sql.DataSource" minimumIdle="5" maximumPoolSize="10" connectionTimeout="300000" dataSourceClassName="org.postgresql.ds.PGSimpleDataSource" dataSource.url="jdbc:postgresql://localhost:5432/lportal" dataSource.implicitCachingEnabled="true" dataSource.user="user" dataSource.password="pwd" />

So this is going to define our connection for Liferay and have it use the Hikari CP pool.

Now if you really want to stick with the older driver-based configuration, then you're going to use something like this:

<Resource name="jdbc/LiferayPool"
    auth="Container"
    factory="com.zaxxer.hikari.HikariJNDIFactory"
    type="javax.sql.DataSource"
    minimumIdle="5"
    maximumPoolSize="10"
    connectionTimeout="300000"
    driverClassName="org.postgresql.Driver"
    jdbcUrl="jdbc:postgresql://localhost:5432/lportal"
    dataSource.implicitCachingEnabled="true"
    dataSource.user="user"
    dataSource.password="pwd" />

Conclusion

Yep, that's pretty much it.  When you restart Tomcat you'll be using your flashy new Hikari CP connection pool.

You'll want to take a look at https://github.com/brettwooldridge/HikariCP#frequently-used for additional tuning parameters for your connection pool, as well as details on the minimum idle, max pool size and connection timeout settings.
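
As a rough illustration, those tuning parameters drop straight into the <Resource /> element alongside the attributes shown earlier; the property names below come from the HikariCP list linked above, and the values are illustrative rather than recommendations:

<Resource name="jdbc/LiferayPool"
    auth="Container"
    factory="com.zaxxer.hikari.HikariJNDIFactory"
    type="javax.sql.DataSource"
    minimumIdle="5"
    maximumPoolSize="10"
    connectionTimeout="300000"
    idleTimeout="600000"
    maxLifetime="1800000"
    registerMbeans="true"
    dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
    dataSource.url="jdbc:postgresql://localhost:5432/lportal"
    dataSource.user="user"
    dataSource.password="pwd" />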

And remember, this is going to be your best production configuration.  If you're using portal-ext.properties to set up any of your database connection properties, you're not as secure as you can be.  After all, a hacker needs information to infiltrate your system; the more details of your infrastructure you expose, the more info you give a hacker to worm their way in.  Using the portal-ext.properties approach, you're exposing your JDBC URL (so hostname and port as well as the DB server type) and the credentials (which will work for DB login but sometimes might also be system login credentials).  This kind of info is worth its weight in gold to a hacker trying to infiltrate you.

So follow the recommended practice of using JNDI references for the database connections and keep this information out of the hackers' hands.

 

David H Nebinger 2017-05-03T15:08:57Z
Categories: CMS, ECM

Liferay Roadshow 2017, Simple Solutions for Complex Challenges

Liferay - Wed, 05/03/2017 - 02:19

As we do every spring, Liferay is supporting our partners in running a roadshow through different cities across Spain. These roadshows are morning-long gatherings where attendees (customers and end users) can connect personally, face to face, with the Liferay and partner teams. It is a relaxed setting featuring real customer stories, presentations and innovative sessions on the use of the Liferay platform.

What Can I Expect at the Roadshow?

Each event has its own personality and agenda, but they do share a common thread: innovation and digital transformation.

Digital transformation is no longer an option; it is a reality for companies and organizations that want to compete in today's business landscape. It is a transformation that spans everything from business processes to leadership models, with technology as the means for managing information and data. A key tool for being prepared and facing this transformation is innovation, and achieving innovation requires the right technology to help evolve the digital strategies that are in such demand today.

Beyond that, the chosen technology has to help transform both internal processes, facilitating information sharing, collaboration and digital workplaces for our employees, and external processes, offering unique, omnichannel experiences for our users and customers. At Liferay we work on Liferay Digital Experience Platform (Liferay DXP) as a platform that combines internal operational change capabilities with digital experiences to serve customers and accelerate innovation.

Each roadshow explores the key pillars of Liferay DXP and how they help businesses respond to changing market needs through sessions such as:

  • How to accelerate transformation with Liferay DXP
  • Experiences and real-world cases with the Liferay platform
  • Hands-on demonstration
  • Innovation sessions
Where Are the Roadshows?

Madrid, Barcelona, Sevilla and Bilbao are the confirmed cities, with everis, mimacom and Zylk as the confirmed partners. You can find more information about the dates and agendas here.

Don't miss out. Join us at Roadshow 2017, 'Real Digital Transformation: Simple Solutions for Complex Challenges,' and make sure you are prepared for the future and can take your business to the next level of success.

Madrid 26/04    |    Barcelona 10/05    |    Bilbao 18/05    |    Sevilla 24/05    |    Barcelona 15/06

Javier Puga 2017-05-03T07:19:52Z
Categories: CMS, ECM

Meet the First Registrant of Liferay Symposium North America 2017!

Liferay - Tue, 05/02/2017 - 12:51
Relationship building and new content keep this six-time repeat attendee coming back.

This October, Liferay Symposium North America (LSNA) is headed to Austin! The Liferay Symposium is a gathering for Liferay users, developers, business influencers and thought leaders to stay current on best practices with Liferay and get fresh insights on customer experience and digital transformation. We took a moment to interview one of our first registrants to discover what he’s looking forward to at this year’s Symposium and why he keeps coming back. Meet David Weitzel, a contractor at Blue Cross Blue Shield Association. In addition to building relationships with his own team and others, Weitzel is excited for updates on new Liferay features, best practices and OSGi.

David Weitzel, Contractor, Blue Cross Blue Shield Association

How many Liferay Symposiums have you attended?

I started attending the LSNA with the East Coast Symposium in Leesburg, VA, in 2011. I’ve missed just one since then.

What keeps you coming back?

The relationships are really important. You don’t really get any other chance to spend two days catching up on what’s going on with business leaders and other consultants. I think it’s been a real strength that the senior leadership of Liferay hasn’t changed [over the last 10 years]. And at Symposium it’s valuable to get a little time with [the CEO,] Bryan Cheung.

When I first started attending it was just me, then it was myself and a colleague, then the two of us and our boss. This year it’s exciting that a small team of us will get to attend. The mix of technical and business solutions experience makes it a compelling proposition, and to get the most from it you need a plan to cover multiple sessions and compare notes.

What are you looking for at Liferay Symposium 2017?

As a Liferay subject matter expert (my card says independent Liferay consultant), I have to keep up with the technology, best practices, future directions and its applications. All of which you can do [at this event]—in 48 hours!

This year in particular I’ll be looking out for more best practices in operations and maintenance, and DXP experiences in particular. Hopefully the workshops will not clash with other important sessions, although the course files and notes make for useful reading after the Symposium.

How are you using Liferay in what you do?

My job goes in fits from consulting, training, design and developing. I’ve done a wide range of tasks, from major business application developments to supporting other customer systems. The overall technical documentation, community forums and the helpfulness of Liferay experts have been very motivational. When I started looking for a replacement for our community system a few years ago, I gave a 10-minute talk [in support of Liferay] in San Francisco.

Today, I focus more on custom application development. The development environment is constantly evolving and it’s hard to keep up with it unless you’re always developing.

How does Liferay help you address customer experience challenges?

The core out-of-the-box capabilities are easy to expand. It’s easy to roll out new functions to users as we move from “content only” towards more collaboration. The basic portal capability to mix portlets of different types in one layout is powerful. Personalization is a key theme of new websites, and Liferay already has many tools available to deliver it out of the box.

What trends or tools in new technology are you most excited about right now?

Lately I’ve been trying to get my head around OSGi and how it impacts MVC portlets design, along with being easier to replace or extend core Liferay code. Ext [plugins] were always avoided at all costs, but hooks for custom listeners were easy to create in Liferay. Hopefully this is as easy [to do] in OSGi.

Two days before this year’s Symposium, Texas plays Oklahoma at home. Do you have a pick?

I’m British so I never went to college in the U.S., but I do love Saturday football! As a Minnesota Vikings fan, I would say Oklahoma—since they gave us Adrian Peterson!

Interested in the event? Register Now!

William Jameson 2017-05-02T17:51:10Z
Categories: CMS, ECM

Call for Proposals: Liferay Symposium North America 2017

Liferay - Tue, 05/02/2017 - 12:24

Each year, we open the stage for Liferay users to share their expertise in business and technology. We’re pleased to accept proposals for this year’s agenda at Liferay Symposium North America, taking place in Austin, Texas from October 16 to 17.

What We’re Looking For

We're looking for exciting presenters to speak to their experience using Liferay in customer experience, digital transformation, connecting technologies, building applications and other use cases. Thought leaders will discuss new and inventive ways to maximize Liferay investments amidst rapid digital transformation and evolving customer needs. Please note that topics aren’t limited to what has been listed, so feel free to get creative!

Examples from LSNA 2016

We select the best proposals from a wide range of topics. For example, at Liferay Symposium North America 2016 we heard from Mount Sinai Health System about building a foundation for a healthcare transformation. We heard from COACH, Inc. on digitizing communication across more than 1,000 stores. And, fittingly, CopaAir shared their story of migrating to the cloud.

Where Do I Start?

Not only is this a great chance to share your work with others in your industry, but staying true to the open source tradition, it’s a great way to help others facing similar issues and opportunities to grow their businesses. If you’re not sure where to begin, consider your challenges. How has Liferay impacted your business and your customers? How did IT and business leaders address a problem to create a high-value business solution? For more information, and to apply, click here. As always, don’t hesitate to reach out if you have any questions. We’re looking forward to hearing from you!

William Jameson 2017-05-02T17:24:30Z
Categories: CMS, ECM

How to Build Customer Loyalty in the Digital Age

Liferay - Tue, 04/25/2017 - 12:02

The age of digital transformation has helped companies better understand and connect with their target audiences, with everything from dynamic content to page behavior insights helping to create a better picture of how individuals interact with companies. However, it has also greatly affected how loyal customers are to any given company.

Studies show that customers are more likely than ever to jump to competitors when they become dissatisfied with their current services, no matter how long they may have had a relationship with a company. Research from Vision Critical has found that 42 percent of Americans will stop shopping with a brand after only two bad experiences, making consistent high-quality customer experience critical in customer retention. While that lack of loyalty may mean ample opportunities for companies looking to expand their clientele, Harvard Business Review research shows that it costs approximately seven times more to gain a new customer than it does to retain one. As such, cultivating customer loyalty has both reputational and financial benefits.

But the question remains, how does a company improve customer loyalty in an age where loyalty is in short supply?

Encouraging Customer Loyalty Through Good Customer Experience

Changes in modern customer loyalty can be seen as an outcome of digital transformation, with more services than ever made convenient and easily accessible online. However, today’s customer often takes greater advantage of these online opportunities than the companies themselves, leading to today’s drop in customer loyalty. One of the largest factors in constantly shifting customer loyalty in the digital age is customer experience. Studies show that while pricing and quality of products may play a part in why a customer chooses one company over another, customer experience (CX) is the most important aspect in his or her choice.

The term customer experience can be applied to any interaction that a potential client has with your company, but there are several specific areas that can have the largest impact on loyalty. Brands can fight back against the waning tide of customer loyalty and its impact on client retention by improving the following areas of customer experience.

Ease of Access

Existing and potential clients should have the ability to quickly and completely reach your company’s services whenever and wherever they want. Today, customers expect to find and receive the online services they want without complications or delays. Without true brand loyalty, making your services easily accessible can make a major difference during a potential customer’s split-second choice between your company or a competitor.

Pre-existing loyalty may cause an existing client to go to you first, but not being able to quickly find/receive the services they want will easily send them to your competitor. Companies should consider how to implement omnichannel experiences in their services. In doing so, target audiences can smoothly and quickly interact online in both desktop and mobile, as well as in person, for a seamless experience that pushes them consistently and naturally toward closing a sale.

Supply Helpful Customer Service

The field of customer service is one of the most memorable interactions between your business and its customers. Customer service can include free shipping on items, customer loyalty rewards programs, return policies, promotional offers and customer support with issues concerning a product. According to research from Harris Interactive, 62% of U.S. consumers have switched brands in the past year due to a poor customer service experience. Good customer service not only reinforces to clients that your company cares about them, but prevents one of the biggest reasons for customer drop-off.

No matter the industry, customer service plays a crucial role in representing your brand in what are often the most decision-influencing interactions in any customer journey. Successfully demonstrating your dependability during these times can have a major positive effect on customer loyalty.

Distinguishing Your Brand Identity

Customers will tie your brand to the customer experience you provide. Should you offer a great experience, customers will attach positive feelings to your brand, but provide poor experiences and these failings will be tied to the brand instead. As such, it’s crucial that customer experiences align with your company’s larger goals so that good experiences not only give clients a positive memory, but also improve your brand’s standing in the public consciousness. For example, Amazon Dash buttons, which allow customers to reorder a product with the single push of a button, distinctly feature the brand of the company. In doing so, customers tie the brand to the simple, successful and satisfying experience they have had in using the button.

Forrester’s Customer Experience Index has found that a customer’s emotional connection with a brand has some of the strongest influence on loyalty. Cultivating that emotional connection and making it a positive one will yield short- and long-term loyalty in an age that has more competitors than ever before. In a sea of products and services from more brands than ever, having a positive emotional tie will help your brand distinguish itself from the crowd and feel less replaceable to clients.

Loyalty in the Digital Age: 4 Strategies to Engage Existing Customers

Learn about more strategies for new digital technologies for data gathering and digital experience delivery in order to better understand your customer and continue building loyalty in the midst of massive digital strategy turmoil.

Read the White Paper

Matthew Draper 2017-04-25T17:02:38Z
Categories: CMS, ECM

Why Portals Are Becoming Digital Experience Platforms, According to the Gartner Magic Quadrant for Horizontal Portals

Liferay - Mon, 04/24/2017 - 18:36

In a digital era in which the ability to provide superior customer experiences has become a competitive differentiator, portals and the things they’re really good at (personalizing experiences, drawing information from many sources across an enterprise, integrating existing systems) have gained new relevance. Portal technology has emerged as an effective way for businesses to take advantage of new ways of doing business and capture the often slippery attention of today’s digitally savvy, self-educating customers.

Last year, Liferay took our flagship portal technology and the best parts of our portfolio and introduced Liferay Digital Experience Platform (DXP). The move was both a response to changing market demands and opportunities in the new digital landscape as well as a much-needed step to move our identity closer to the reality of what the Liferay platform had become for today’s businesses. Liferay had not been “just a portal” for quite some time, but rather a platform for creating and managing differentiated user experiences across channels, devices and types of users (e.g., customers, partners, employees).

A few months later, the 2016 Gartner Magic Quadrant for Horizontal Portals was published with what seemed to be news of an official market phenomenon: portals were becoming digital experience platforms. The authors write: “The primary catalyst for change in the horizontal portal market is the response to digital business transformation: the evolution of traditional portal into the digital experience platform.” The report predicts that portals will be more comprehensive in what they’re able to do for businesses than in the past due to the demands of digital transformation, and that this will increase the growth rate of portal and digital engagement tech by about 5 percent over the next five years.

With these factors in play, the MQ evaluated and positioned 16 vendors, largely a mix of CMS- and portal-heritage companies. For the seventh year in a row, Liferay is positioned as a Leader in the report, a position the analysts determine through measurements of our completeness of vision and ability to execute. Other Leaders include IBM, Microsoft, Oracle, Salesforce and SAP. Liferay has had a long history of inclusion in the MQ for Horizontal Portals, first entering in 2008 as a Visionary (bottom right of the quadrant). Since then, we’ve surpassed every vendor except IBM over the years in achieving the highest and furthest position in the quadrant.

The following table, which groups the 16 vendors included in the Magic Quadrant by their software’s primary origins, underscores the blurring distinction between traditional software categories as vendors expand their products’ capabilities to answer the needs of digital business. Portals of today are no longer just a portal (an aggregator of personalized content, data and processes through a single point of access) and a CMS is no longer just a CMS. Liferay, for example, offers a platform that is flexible, based on modern architecture, serves the needs of business and IT, and supports continuous, omnichannel experiences across mobile, web and smart devices. This is all in the name of equipping digital businesses with a single platform to glue together digital experiences across the enterprise.

Portal Heritage: Backbase CXP, Episerver, IBM WebSphere, Jahia, Liferay DXP, Microsoft SharePoint, OpenText, Oracle WebCenter
CMS Heritage: Adobe Experience Manager, Drupal, Hippo CMS, Kentico, Sitecore, Squiz
CRM Heritage: Salesforce Community Cloud
ERP Heritage: SAP


As portals gain new importance in today’s digital environment, sometimes in the form of DXPs, digital technologies and changing customer expectations are leading to fresh applications of long-standing portal platform strengths. Liferay has excelled in tying together multiple systems, such as a CRM or ERP, through back-end integration, and for today’s digital businesses, this integration is crucial to connect systems and the people using them to share customer data and work as one connected business to serve customers. As a flexible, modern development platform, Liferay DXP prepares businesses to take quick advantage of new digital opportunities born every day and emerging in the not too distant future. And the Liferay platform’s strength in building personalized sites for customer self-service and onboarding is an opportunity for digital businesses to create a seamless customer experience from anonymous user (public websites) to registered customer (customer self-service portal, customer onboarding site) to lifelong fan.

As a longtime leader in portal technology, and now in applying that technology for the benefit of digital businesses, we recommend that companies planning and undergoing digital transformation take a serious look at including a modern, portal-based platform such as Liferay DXP in their technology stacks to tie together digital experiences across their enterprises.

Learn More About Critical Capabilities for Horizontal Portals

Take a deeper dive into how Gartner differentiates the top portal vendors, including Liferay, based on core capabilities and portal use cases in the Gartner Critical Capabilities for Horizontal Portals report.

Rebecca Shin 2017-04-24T23:36:47Z
Categories: CMS, ECM

Take Liferay’s Digital Business Survey

Liferay - Mon, 04/24/2017 - 14:09

The age of digital transformation is impacting every industry and companies both small and large. But understanding your business’ standing in this era of massive change can be difficult without knowing how you stack up against your competitors. Take the following short survey on digital business priorities and customer experience to discover insights from your industry peers.

All respondents will receive a free report of the survey results in early summer, which will help you better understand and take advantage of the latest trends in digital business.

Take the Survey Now

  Matthew Draper 2017-04-24T19:09:31Z
Categories: CMS, ECM

Why Back-End Integration Should Be Every Marketer’s Goal

Liferay - Thu, 04/20/2017 - 13:46

The marketing strategies of companies across all types of industries are often focused on front-end priorities, such as implementing campaigns, shaping brand awareness through public relations, pushing demand generation, improving customer interactions with websites and more. However, today’s most successful front-end marketing campaigns pivot and improve based on back-end data management and large amounts of collected information that measure success.

The modern age of digital transformation is changing how marketers successfully generate and manage leads, with back-end integration being a key component of campaigns that fully embrace transformation.

What is back-end integration? It is the process in which the front end of marketing strategies is fully connected to the technological infrastructure that stores a company’s information used by marketing, such as customer data management technology, campaign performance analytics, and interdepartmental data transfer. This allows for a flow of information between both sides of marketing that helps to continually strengthen campaigns by honing in on what works and eliminating what does not. While it may seem like keeping these two entities connected to one another is a natural part of modern marketing, countless companies are affected by massive shortcomings in the ways front-end and back-end marketing are integrated.

It may be that shortages on time, resources, and awareness have led your company to not fully embrace integration through digital transformation, but those who are integrating front-end and back-end systems have seen its true potential in evolving their marketing strategies.

The Marketing Benefits of Back-End Integration

So what does digital transformation mean for marketers? Successful back-end integration has numerous benefits, including:

1. Usable Data - Back-end integration means much more data on customer interactions to collect and analyze when determining the effectiveness of an online digital marketing strategy, as each front-end interaction will be tracked.

2. Seamless Improvements for Users - In connecting front-end user experiences with back-end data collection, marketers can create a seamless system of evaluating online client interactions and turning them into measurable analytics that help empower a company's workforce. The process involves in-depth analysis and a constant commitment to improvement, but front-end interactions will seem effortless and organic to customers.

3. Focused Marketing Improvements - By improving back-end processes, businesses can find key takeaways on how potential clients are interacting with marketing material. Marketing teams that solely focus on front-end marketing and design, or that rely on outdated methods of client data analysis, can end up with static websites that gain little valuable data from visitors, no matter how many may come to the site. Without that data, refocusing marketing efforts can be like trying to hit a target blindfolded. As discussed by Entrepreneurs-Journey.com, connecting a marketing organization's front-end with back-end data, technology and processes is integral in driving predictable demand generation and strategic customer acquisition.

4. Greater Marketing ROI - Through the use of back-end integration, marketing departments can create detailed cause-and-effect data concerning their interactions with potential clients. In doing so, companies can decrease their cost per opportunity while also increasing lead-to-customer velocity, according to reports by Integrate. Combined, these two forms of digital transformation have a substantial total impact on return on investment (ROI), as well as on optimizing a marketing department’s usage of client data.

Equally Improving Front-End and Back-End Marketing

In today's world of fast-paced online marketing and the need to keep pace with competitors, some businesses may see marketing as a matter of front-end versus back-end when deciding where to invest both time and money. But prioritizing one over the other means that neither can truly function to the greatest degree possible, as these systems rely on one another in order to make the most of their functions.

It is crucial for teams to understand that while not every individual marketer needs to work in both front-end and back-end integration, these two sides must be highly interconnected, as discussed by SorryforMarketing.com. While having both a front-end and back-end team means that they can tackle different goals in the day to day, the use of integrated systems and a cohesive team view means both sides of marketing can adapt to changing trends and make better use of the data they receive through customer insight.

When evaluating how to better integrate your company’s marketing efforts, take the time to see where your shortcomings lie. Integration is only successful when your back-end systems are collecting accurate data and front-end strategies properly understand what that data means for their marketing efforts. A skilled back-end integration manager will help to balance these two sides and oversee proper and consistent integration that creates dynamic digital marketing strategies, which adapt to new client data for more effective campaigns.

The road from first interaction with a visitor online to closing a sale with a lead can be a long and winding one. If a marketing team is unable to see what that road actually involves, then all manner of incorrect assumptions can be made. Take the guesswork out of your digital marketing. Know your customer. Know your campaign. Embrace back-end integration and begin seeing the full picture of your company’s audience.

Learn more by reading “Why Your Marketing Technology Isn’t Impacting the Bottom Line.”

How Can Liferay Help You Integrate More Fully?

Find out how Liferay Digital Experience Platform creates personalized experiences to make the most out of back-end integration marketing efforts.

Learn About Digital Transformation

Matthew Draper 2017-04-20T18:46:49Z
Categories: CMS, ECM