AdoptOS

Assistance with Open Source adoption


New Installers and IDE 3.2.0 Milestone 1 Released

Liferay - Wed, 04/18/2018 - 01:04

New Installers Released

 

Hello all,

 

We are pleased to announce a new release of the Liferay Project SDK 2018.4.4 installer, the Liferay Project SDK with Dev Studio Community Edition installer, and the Liferay Project SDK with Dev Studio DXP installer.

 

New Installers:

 

The new installers require at least Eclipse Oxygen. Customers can download all of them from the customer studio download page.

 

As with the previous 3.1 GA release, the installer is the full-fledged Liferay Developer Studio installer: it installs Liferay Workspace, Blade CLI, and Developer Studio, and comes pre-bundled with the latest Liferay DXP server. It also supports configuring a proxy for downloading Gradle dependencies.

 

If you want to upgrade from Studio 3.1 B1 or 3.1 GA, you first need to add the Oxygen update site and update Eclipse to Oxygen. Then you can upgrade through the Help > Install New Software... dialog.

 

Upgrade From previous 3.1.x:

  1. Download the update site archive.
  2. Go to Help > Install New Software… > Add…
  3. Select Archive... and browse to the downloaded update site.
  4. Click OK to close the Add Repository dialog.
  5. Select all features to upgrade, click Next, click Next again, and accept the license agreements.
  6. Click Finish and restart to complete the upgrade.

 

Release highlights:

  • Support for Liferay 7.1 bundles
  • Bundles the latest Liferay Portal

- 7.1.0 Alpha is bundled in the Liferay Project SDK with Dev Studio Community Edition installers

- DXP SP7 is bundled in the Liferay Project SDK with Dev Studio DXP installers

  • Third-party plugin updates

- update m2e to 1.8.2

- update bndtools to 4.0.0

- update the Buildship Gradle plugin to 2.2.1

  • Code Update Tool

- detects more than 110 breaking changes for Liferay DXP/7

- improvements to auto-fix

- performance improvements when finding breaking changes

  • Better Liferay Workspace Support

- update Gradle workspace version to 1.9.0

- update Maven workspace

  • Liferay DXP/7 bundle support improvement

- integrate Liferay DXP SP7 support for Tomcat and Wildfly

- integrate Liferay 7 CE GA5 support for Tomcat and Wildfly

  • Better deployment support for Liferay DXP/7

- integration of Blade CLI 3.0.0

- support for Plugins SDK 1.0.16

- support for Liferay Workspace Maven

- support for Liferay Workspace Gradle 1.9.0

  • Miscellaneous bug fixes

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!

Yanan Yuan 2018-04-18T06:04:17Z
Categories: CMS, ECM

Bringing DropWizard Metrics to Liferay 7/DXP

Liferay - Mon, 04/16/2018 - 22:12
Introduction

So in any production system, there is typically a desire to capture metrics, use them to define a system health check, and then monitor the health check results from an APM tool to preemptively notify administrators of problems.

Liferay does not provide this kind of functionality, but it was functionality that I needed for a recent project.

Rather than roll my own implementation, I decided that I wanted to start from DropWizard's Metrics library and see what I could come up with.

DropWizard's Metrics library is well known for its usefulness in this space, so it is an obvious starting point.

The Metrics Library

As a quick review, the Metrics library exposes objects representing counters, gauges, meters, timers and histograms. Based upon what you want to track, one of these metric types will be used to store the runtime information.

In addition, there's also support for defining a health check, which is essentially a test that returns a Result (a pass/fail) and is intended to be combined with the metrics as the basis for the result evaluation.

For example, you might define a Gauge for available JVM memory. As a gauge, it will basically be checking the difference between the total memory and used memory. A corresponding health check might be created to test that available memory must be greater than, say, 20%. When available memory drops below 20%, the system is not healthy and an external APM tool could monitor this health check and issue notifications when this occurs. By using 20%, you are giving admins time to get in and possibly resolve the situation before things go south.
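As a rough sketch of that memory example (my own illustration, not code from the Metrics documentation; the registry setup and metric names are arbitrary), the gauge and health check might look like this:

import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.health.HealthCheck;
import com.codahale.metrics.health.HealthCheckRegistry;

public class MemoryHealth {

    public static void register(MetricRegistry metrics, HealthCheckRegistry healthChecks) {
        // Gauge reporting the fraction of JVM memory that is still available.
        Gauge<Double> availableMemoryRatio = () -> {
            Runtime runtime = Runtime.getRuntime();
            long used = runtime.totalMemory() - runtime.freeMemory();
            return 1.0 - ((double) used / runtime.maxMemory());
        };

        metrics.register("jvm.available-memory-ratio", availableMemoryRatio);

        // Health check that fails once available memory drops below 20%.
        healthChecks.register("jvm.memory", new HealthCheck() {
            @Override
            protected Result check() {
                double ratio = availableMemoryRatio.getValue();

                return (ratio >= 0.2) ? Result.healthy() :
                    Result.unhealthy("Only %.0f%% of JVM memory is available", ratio * 100);
            }
        });
    }
}

An APM tool polling the health check would then see the unhealthy result as soon as the threshold is crossed.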

So that's the overview, but now let's talk about the code.

When I started reviewing the code, I was initially disheartened to see very little in the way of "design by interface". For me, design by interface is an indicator of how easy or hard it will be to bring the library into the OSGi container. With heavy design by interface, I can typically subclass key implementations and expose them as @Components, and consumers can just @Reference the interfaces and OSGi will take care of the wiring.

Admittedly, this kind of architecture can be considered overkill for a metrics library. The library developers likely planned for the lib to be used in java applications or even web applications, but likely never considered OSGi.

At this point, I really struggled with figuring out the best path forward. What would be the best way to bring the library into OSGi?

For example, I could create a bunch of interfaces representing the clean metrics and some interfaces representing the registries, then back all of these with concrete implementations as @Components that are shims on top of the DropWizard Metrics library. I soon discarded this because the shims would be too complicated, casting things back and forth between the interfaces and the Metrics library implementations.

I could have cloned the existing DropWizard Metrics GitHub repo and basically hacked it all up to be more "design by interface". The problem here, though, is that every update to the Metrics lib would require all of this repeated hacking up of their code to bring the updates forward. So this path was discarded.

I could have taken the Metrics library and used it as inspiration for building my own library. Except then I'd be stuck maintaining the library and re-inventing the wheel, so this path was discarded.

So I settled on a fairly light-weight solution that, I feel, is OSGi-enough without having to take over the Metrics library maintenance.

Liferay Metrics

The path I elected to take was to include and export the DropWizard Metrics library packages from my bundle and add in some Liferay-specific, OSGi-friendly metric registry access.

I knew I had to export the Metrics packages from my bundle since OSGi was not going to provide them and having separate bundles include their own copies of the Metrics jar would not allow for aggregation of the metrics details.

The Liferay-specific, OSGi-friendly registry access comes from two interfaces:

  • com.liferay.metrics.MetricRegistries - A metric registry lookup to find registries that are scoped according to common Liferay scopes.
  • com.liferay.metrics.HealthCheckRegistries - A health check registry lookup to find registries that are scoped according to common Liferay scopes.

Along with the interfaces, there are corresponding @Component implementations that can be @Reference injected via OSGi.

Liferay Scopes

Unlike a web application, where there is typically a single scope (the application itself), Liferay has a bunch of common scopes used to group and aggregate details. A metrics library is only useful if it too can support scopes in a fashion similar to Liferay. Since the DropWizard Metrics library supports separate metric registries, it was easy to overlay the common Liferay scopes onto the registries.

The supported scopes are:

  • Portal (Global) scope - This registry would contain metrics that have no separate scope requirements.
  • Company scope - This registry would contain metrics scoped to a specific company id. For example, if you were counting logins by company, the login counter would be stored in the company registry so it can be tracked separately.
  • Group (Site) scope - This registry would contain metrics scoped to the group (or site) level.
  • Portlet scope - This registry would contain metrics scoped to a specific portlet plid.
  • Custom scope - This is a general way to define a registry by name.

Using these scopes, different modules that you create can look up a specific metric in a specific scope without tight coupling between your own modules.

Metrics Servlets

The DropWizard Metrics library ships with a few useful servlets, but to use them you need to be able to add them to your web application's web.xml file. In Liferay/OSGi, instead we want to leverage the OSGi HTTP Whiteboard pattern to define an @Component that gets automagically exposed as a servlet.

The Liferay Metrics bundle does just that; it exposes five of the key DropWizard servlets, but they use OSGi facilities and the Liferay-specific interfaces to provide functionality.
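For reference, here's a minimal sketch of what a whiteboard-registered servlet looks like (an illustration of the pattern, not one of the actual servlets from the bundle; the servlet name and URL pattern are made up, and in Liferay the pattern is typically served under the /o/ prefix):

import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;

@Component(
    immediate = true,
    property = {
        "osgi.http.whiteboard.servlet.name=Example Ping Servlet",
        "osgi.http.whiteboard.servlet.pattern=/example/ping"
    },
    service = Servlet.class
)
public class ExamplePingServlet extends HttpServlet {

    // The OSGi HTTP Whiteboard picks up this component and maps it to the
    // declared pattern; no web.xml entry is needed.
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException {

        response.setContentType("text/plain");
        response.getWriter().print("pong");
    }
}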

The following list provides details on the servlets:

  • CPU Profile (/o/metrics/gprof) - Generates and returns a gprof-compatible file of profile details.
  • Health Check (/o/metrics/health-checks) - Runs the health checks and returns a JSON object with the results. Takes two arguments, type (for the desired scope) and key (for the company or group id, plid or custom scope name).
  • Metrics (/o/metrics/metrics) - Returns a JSON object with the metrics for the given scope. Takes the same two arguments, type and key, as described for the Health Check servlet.
  • Ping (/o/metrics/ping) - Simple servlet that responds with the text "pong". Can be used to test that a node is responding.
  • Thread Dump (/o/metrics/thread-dump) - Generates a thread dump of the current JVM.
  • Admin (/o/metrics/admin) - A simple menu to access the above listed servlets.

The Ping servlet can be used to test if the node is responding to requests. The Metrics servlet can be used to pull all of the metrics at the designated scope and evaluate them in an APM for alerting. The Health Check servlet runs health checks defined in code that perhaps needs access to server-side details to evaluate health, but these too can be invoked from an APM tool to evaluate health.

The CPU Profile and Thread Dump servlets can provide useful information to assist with profiling your portal or capturing a thread dump to, say, submit to Liferay support on a LESA ticket.

The Admin servlet, while not absolutely necessary, provides a convenient way to get to the individual servlets.

NOTE: There are no security or permission checks bound to these servlets. It is expected that you would take appropriate steps to secure their access in your environment, perhaps via firewall rules to block external access to the URLs, or whatever is appropriate to your organization.

Metrics Portlet

In addition, there is a really simple Liferay MVC portlet under the Metrics category, the Liferay Metrics portlet. This super-simple portlet just dumps all of the information from the various registries. It can be used by an admin to view what is going on in the system, but if used it should be permissioned against casual usage by general users.

Using Liferay Metrics

Now for some of the fun stuff...

The DropWizard Metrics Getting Started page shows a simple example for measuring pending jobs in a queue:

private final Counter pendingJobs = metrics.counter(name(QueueManager.class, "pending-jobs"));

public void addJob(Job job) {
    pendingJobs.inc();
    queue.offer(job);
}

public Job takeJob() {
    pendingJobs.dec();
    return queue.take();
}

Our version is going to be different from this, of course, but not by all that much. Let's assume that we are going to be tracking the metrics for the pending jobs by company id. We might come up with something like:

@Component(immediate = true)
public class CompanyJobQueue {

    public void addJob(long companyId, Job job) {
        // fetch the counter
        Counter pendingJobs = _metricRegistries.getCompanyMetricRegistry(companyId).counter("pending-jobs");

        // increment
        pendingJobs.inc();

        // do the other stuff
        queue.offer(job);
    }

    public Job takeJob(long companyId) {
        // fetch the counter
        Counter pendingJobs = _metricRegistries.getCompanyMetricRegistry(companyId).counter("pending-jobs");

        // decrement
        pendingJobs.dec();

        // do the other stuff
        return queue.take();
    }

    @Reference(unbind = "-")
    protected void setMetricRegistries(final MetricRegistries metricRegistries) {
        _metricRegistries = metricRegistries;
    }

    private MetricRegistries _metricRegistries;
}

The keys here are that MetricRegistries is injected by OSGi, and that it is used to locate the specific DropWizard metric registry where the metrics can be retrieved or created. Since metrics can be easily looked up, there is no reason to hold a reference to a metric indefinitely.

In the liferay-metrics repo, there are some additional examples that demonstrate how to leverage the library from other Liferay OSGi code.

Conclusion

So I think that kind of covers it. I've pulled in the DropWizard Metrics library as-is, exposed it to the OSGi container so other modules can leverage the metrics, and provided an OSGi-friendly way to inject registry locators based on common Liferay scopes. There are also the exposed servlets, which provide APM access to metrics details, and a portlet to see what is going on from a regular Liferay page.

The repo is available from https://github.com/dnebing/liferay-metrics, so feel free to use and enjoy.

Oh, and if you have some additional examples or cool implementation details, please feel free to send me a PR. Perhaps the community can grow this out into something everyone can use...

David H Nebinger 2018-04-17T03:12:36Z
Categories: CMS, ECM

Upcoming GDPR-focused features for Liferay DXP

Liferay - Mon, 04/16/2018 - 16:01

May 25 is fast approaching. Every business impacted by GDPR should be well underway in preparing for the changes to data processing set forth by the regulation. To address the heightened requirements for empowering users' control of their personal data, Liferay has been evaluating and building features into Liferay DXP to aid our customers in their journey toward compliance. I wanted to share what customers can expect in the upcoming release of Liferay Digital Enterprise 7.1 this summer (with an update to DE 7.0 scheduled thereafter with the same features).

But First...

Before jumping into the details of what Liferay is building, allow me to reiterate something I've been stressing in our papers, blogs, and talks: GDPR compliance cannot be achieved by simply checking off a list of technical requirements. True compliance requires businesses to holistically adopt both organizational and technical practices of greater protection for their users' personal data. This may include training personnel, auditing all stored user data, establishing data breach response strategies, appointing a data protection officer, redesigning websites to obtain consent for targeted marketing, responding to users' right to be forgotten, etc. Beware of vendors that supposedly provide turnkey solutions for GDPR compliance, regardless of what they promise (and how much they cost). No such solution exists.

In regards to the technical measures GDPR stipulates, the heart of the regulation is encapsulated by the requirement of data protection by design and by default. As businesses select Liferay DXP to build their digital transformation solution, the responsibility falls on the business to design their solution in a way that satisfies this concept of “data protection by design and by default.”

Though no software product can truthfully claim to be “GDPR compliant,” the platform and tools provided by the product can greatly accelerate or hinder a business’s journey toward compliance. Out of the box, Liferay DXP already provides rich capabilities for designing and managing privacy-centric solutions (some of which are described in our Data Protection for Liferay Services and Software whitepaper), but there's much more we can provide to help our customers.

After wrestling with the couple hundred pages of regulation, we decided to first focus on the concrete requirements that are most painful for customers to implement themselves. Specifically, we evaluated GDPR's data subject rights and identified the right to be forgotten and the right to data portability as the most challenging to tackle given Liferay DXP’s current feature set. Google Trends also affirms these two are of greatest interest (and likely anxiety) among users.

So here's what Liferay's engineering team has been working on:

Right To Be Forgotten

The right to be forgotten (technically known as the “right to erasure”) requires organizations to delete an individual’s personal data upon his/her request (excluding data the organization has a legitimate reason to retain, like financial records, public interest data, etc.). Personal data is considered erased when the data can no longer be reasonably linked to an identifiable individual and is thus no longer subject to GDPR. This can be accomplished by simply deleting or carefully anonymizing the personal data. Proper anonymization is difficult and tedious but may be the preferred option depending on the business’s use case. For example, Liferay wants to keep the technical content on our community forums, but we must sanitize the posts and scrub personal data if a user invokes his right to be forgotten.

Our engineering team is adding a tool to the user management section to review a user's personal data stored on Liferay. The UI will present the user's personal data per application (Blogs, Message Boards, Announcements, third-party apps, etc.). Administrators can either delete the data or edit the content in preparation for anonymization.
For example, if a community member writes a blog post containing useful technical information (for example, DXP upgrade tips) but also started the blog with an anecdotal story that contains personal information (for example, “My daughter Alyssa once told me …”), an administrator may want to remove the personal story. After satisfactorily editing the content, the data erasure tool can automatically scrub data fields like userName and userId. The tool will also automatically scrub these data fields from system data tables like Layout and BackgroundTask.

Accompanying the UI is a programmatic interface to mark data fields potentially containing personal data. Any third-party application can implement these interfaces to surface personal data through the UI. The interface also allows custom logic to anonymize or delete personal data. For example, instead of deleting a user's entire postal address, customers may want to keep just the zip code for analytics purposes.

Right To Data Portability

The right to data portability requires organizations to provide a machine-readable export of a user's personal data upon request. The regulation's goal is to prevent vendor lock-in, where users find the cost of switching service providers too burdensome. In theory, this right empowers individuals to migrate their data from their current mortgage provider to a competitor, for example. The regulation even stipulates that organizations should transfer a user's personal data directly to another organization where “technically feasible,” though this likely won't be a reality in the near future.

Alongside our data erasure tool, our engineering team is building a tool to export a user's personal data. This will behave similarly to Liferay's import/export pages feature, except the focus will be on exporting personal data rather than page data. The administrator UI will list a user's personal data per application and asynchronously export the data.

Down The Road

This is only the beginning of the privacy-focused features we plan to bake into our platform. Though the roadmap for 7.2 is still up in the air, we're evaluating ideas like changes to Service Builder's data schema to potentially aid pseudonymization (separating personal data from identifiable individuals via some key). We've considered building a privacy dashboard for end users to visualize and control their own personal data. We've also thought about baking in a consent manager so businesses can better comply with the strengthened consent requirements.

Privacy is a justifiably growing concern that ultimately reaches beyond the territorial scope of GDPR. The May 25 deadline is forcing organizations to evaluate and address the ethical impact of data collection in this brave new digital world. Currently much of that conversation stems from FUD leading to rubbish misinformation. But the dust will settle in the coming months and years. Organizations caught unprepared will potentially face costly penalties. Better and best privacy practices will eventually emerge and become standard practice, not unlike the standard InfoSec practices that have developed over the last couple of decades. Throughout that process, Liferay will continuously evaluate what our platform and services can provide to aid our customers in their journey toward thoughtfully guarding their users' data.

If you'd like to better understand how your organization can prepare for GDPR, check out our webinar: GDPR: Important Principles & Liferay DXP.

Dennis Ju 2018-04-16T21:01:59Z
Categories: CMS, ECM

Why Great B2B Customer Experiences are More Important Than Ever

Liferay - Mon, 04/16/2018 - 10:31

Modern and personalized customer experiences that rely on cutting-edge technology have played a major role in the business to consumer (B2C) market for many years. However, the business to business (B2B) market is beginning to rely on great user experiences more than ever, with many companies adopting user interfaces, such as portals, that reflect the personalized and fast experiences most often seen in the B2C market.

These great B2B user experiences are continuing to grow in importance for companies as more and more processes move online. According to Forrester, B2B eCommerce will account for 13.1% of all B2B sales in the United States by the year 2021, indicating a steady increase for the foreseeable future when compared to the 11% share of B2B eCommerce seen in 2017.

With B2B digital experiences continuing to play an increasingly crucial role in the long-term success of companies, it is important that businesses work to improve and refine their online presence. But the question remains: what makes a great B2B user experience?

The Influence of B2C on B2B User Experience

According to McKinsey research, B2B customer experience index ratings rank far lower than their B2C counterparts, with the average B2B company scoring below 50 percent compared to the typical 65 to 85 percent scored for B2C companies. This indicates that the majority of B2B customer experience audience members are dissatisfied with their online interactions with companies in the industry.

While there is a difference between the audiences and goals of B2B and B2C companies, the modern customer does not necessarily distinguish between the two in their minds. B2B customers interact with B2C experiences every day, such as shopping on Amazon for their own personal needs. These B2C companies are continually providing the latest in digital experiences in an effort to compete with one another in ways that may not be seen as often in the B2B realm. The result is that consistently rising customer expectations regarding B2C experiences are migrating to the B2B sphere.

Today’s B2B audience has grown to expect well-designed user interfaces that remember their interests, provide services and products that predict needs based on past purchases and more features that make the journey quick and easy to navigate.

What is Holding Back Your B2B User Experience?

As discussed by Customer Think, only 17% of B2B companies have fully integrated customer data throughout the organization, which means that the decisions being made by these businesses are often based on flawed or incomplete data insights. Should a company be unable to access customer insights from all departments, such as customer service or social media, they may miss out on specific aspects of the experience that highly influence the overall quality of business interactions, as well as data that can provide a more accurate view of each audience member.

Beyond gathering data to enhance experiences, businesses may not have the capabilities needed to completely control and execute their customer experience strategy. Research by Accenture found that only 21% of B2B companies have total control over their sales partners, who are largely responsible for delivering CX to their audience. If a business is unable to determine how, when and to whom these experiences are provided, even a well-constructed B2B user interface can result in an unsuccessful experience.

Back-end integration that allows greater and more accurate access to customer data, modern interfaces that allow for personalization based on individual needs and improved delivery systems governing how these interfaces are provided to audience members can greatly enhance a company’s modern B2B user experience.

How Can a Great Customer Experience Impact Your B2B Relationships?

Great customer experience strategies work to create an environment that is free of friction and provides users with a journey that meets their every need as quickly and easily as possible. While B2B audiences may not be as likely to abandon a shopping experience or choose a competitor due to poor experiences as B2C audiences, the impact of experiences on long-term relationships is steadily increasing.

According to research regarding B2C and B2B experiences by the Temkin Group, 86 percent of those who receive a great customer experience are likely to return for another purchase. However, the study also found that only 13 percent of people who had a sub-par customer experience will return. In addition, engaged and satisfied customers will buy 50% more frequently and spend 200% more annually, as found by Rosetta.

The importance of creating great B2B experiences is not just in keeping up with competitors and audiences, it also has a positive impact on company performance. As shown by McKinsey, B2B companies that transformed their customer experience processes saw benefits similar to those seen by B2C companies, including a 10 to 15 percent revenue growth, higher client satisfaction scores, improved employee satisfaction and a 10 to 20 percent reduction in operational costs.

The combination of these benefits means a higher ROI on B2B operations, supporting the company as a whole.

Create an Effective B2B Customer Experience

A well-crafted customer experience will help to meet your audience needs and encourage long-term client relationships, and it all begins with an effective strategy. Learn more about what strategy is right for you with our whitepaper insights.

Read “Four Strategies to Transform Your Customer Experience”   Matthew Draper 2018-04-16T15:31:44Z
Categories: CMS, ECM

Liferay 7/DXP: Making Logging Changes Persistent

Liferay - Mon, 04/16/2018 - 09:26
Introduction

I have never liked one aspect of Liferay logging - it is not persistent.

For example, I can't debug a startup issue unless I get the portal-log4j-ext.xml file set up and out there.

Not so much of a big deal as a developer, but as a portal admin if I use the control panel to change logging levels, I don't expect them to be lost just because the node restarts.

Solution

So about a year ago, I created the log-persist project.

The intention of this project is to persist logging changes.

The project itself contains 3 modules:

  • A ServiceBuilder API jar to define the interface over the LoggingConfig entity.
  • A ServiceBuilder implementation jar for the LoggingConfig entity.
  • A bundle that contains a portlet ActionFilter implementation to intercept incoming ActionRequests for the Server Administration portlet (the portlet with the logging config panel).

The ServiceBuilder aspect is pretty darn simple: there is only a single entity defined, LoggingConfig, which represents a logging configuration.

The action is in the ActionFilter component. This component wires itself to the Server Administration portlet. All incoming ActionRequests (meaning all actions a user performs on the Server Administration portlet) will be intercepted by the filter. The filter passes the ActionRequest on to the real portlet code, but upon return from the portlet code, the filter will check the command to see if it was the "addLogLevel" or "updateLogLevels" commands, the ones used in the portlet to change log levels. For those commands, the filter will extract the form values and pass them to the ServiceBuilder layer to persist.

Additionally the filter has an @Activate method that will be invoked when the component is started. In this method, the code pulls all of the LoggingConfig entities and will re-apply them to the Liferay logging configuration.
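To make the flow from the last two paragraphs concrete, here's a rough sketch of what such a filter component could look like. This is my own illustration, not the actual log-persist code: the Server Administration portlet name, the LoggingConfigLocalService API, and the applyLogLevel() helper are all assumptions standing in for the real pieces.

import java.io.IOException;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.PortletException;
import javax.portlet.filter.ActionFilter;
import javax.portlet.filter.FilterChain;
import javax.portlet.filter.FilterConfig;
import javax.portlet.filter.PortletFilter;

import com.liferay.portal.kernel.util.Constants;
import com.liferay.portal.kernel.util.ParamUtil;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    immediate = true,
    // assumed portlet name for Server Administration; verify against your portal version
    property = "javax.portlet.name=com_liferay_server_admin_web_portlet_ServerAdminPortlet",
    service = PortletFilter.class
)
public class LoggingConfigActionFilter implements ActionFilter {

    @Activate
    protected void activate() {
        // Re-apply every persisted logging configuration when the component starts.
        _loggingConfigLocalService.getLoggingConfigs().forEach(
            config -> applyLogLevel(config.getLoggerName(), config.getLevel()));
    }

    @Override
    public void doFilter(ActionRequest request, ActionResponse response, FilterChain chain)
        throws IOException, PortletException {

        // Let the real Server Administration portlet handle the action first.
        chain.doFilter(request, response);

        String cmd = ParamUtil.getString(request, Constants.CMD);

        // Persist the submitted logger names and levels for the log level commands.
        if ("addLogLevel".equals(cmd) || "updateLogLevels".equals(cmd)) {
            // extract the logger name/level form values and save them via the service...
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }

    private void applyLogLevel(String loggerName, String level) {
        // hypothetical helper: push the persisted level back into the logging framework
    }

    @Reference
    private LoggingConfigLocalService _loggingConfigLocalService;
}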

All you need to do is build the 3 modules and drop them into your Liferay deploy folder; they'll take care of the rest.

Conclusion

So that's it. I should note that the last module is not really necessary. I mean, it only contains a single component, the ActionFilter implementation, and there's no reason that it has to be in its own module. It could certainly be merged into the API module or the service implementation module.

But it works. The logging persists across restarts and, as an added bonus, will apply the logging config changes across the cluster during startup.

It may not be a perfect implementation, but it will get the job done.

You can find it in my git repo: https://github.com/dnebing/log-persist

David H Nebinger 2018-04-16T14:26:22Z
Categories: CMS, ECM

Liferay Portal 7.1 Alpha 1 Release

Liferay - Thu, 04/12/2018 - 17:51
I'm pleased to announce the immediate availability of: Liferay Portal 7.1 Alpha 1
 
  Download Now!

We announced the Liferay 7.1 Community Beta Program on February 19th alongside our first 7.1 Milestone release.  We launched the first phase of the community beta program which was to receive feedback from our community on new features being released in each milestone release.  Our awesome community heeded the call with over 120 participants and over 130 posts to the feedback forum.  We greatly appreciate all the feedback generated from our community.  Based on some of the feedback we even made changes to the product itself! 

With that being said it is my pleasure to announce Liferay 7.1 Alpha 1.  With the release of Liferay 7.1 Alpha 1 we also would like to launch phase 2 of our beta program: Bug Reports.  If you run into an issue using Alpha 1, please let us know by posting it in our Feedback Forums.  If you have yet to sign up for the beta program, it's never too late.  Sign up today!

New Features Summary

Modern Site Building: Liferay 7.1 introduces a new way of adding content. Fragments allow a content author to create content in small reusable pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use page templates from within a site and have complete control over the layout of your content pages. Navigation menus now give you complete control over site navigation. Create site navigation in new and interesting ways and have full control over the navigation's visual presentation.

Forms Experience: Liferay 7.1 includes a completely revamped forms experience. Forms can now have complex grid layouts, numeric fields and file uploads. They now include new personalization rules that let you customize the default behavior of the form. Using the new Element Sets, form creators can now create groups of reusable components. Form fields can now be translated into any language using any Liferay locale and can also be easily duplicated.

Redesigned System Settings: System Settings has received a complete overhaul. Configurations have been logically grouped together, making it easier than ever before to find what's configurable. Several options that were located on Server Administration have also been moved to System Settings.

User Administration: The user account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add any form they want to user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Friendly URLs used to be generated based on the entry's title; authors now have complete control over the friendly URL of the entry. Estimated reading time can be enabled in System Settings and will be calculated based on the time taken to read an entry. Blogs also have a new cards ADT that can be selected from the application configuration. Videos can now be added inline while writing a new entry from popular services such as YouTube, Vimeo, Facebook Video, and Twitch. Message Boards users can now attach as many files as they want by dragging and dropping them in a post. Message Boards has also had many visual updates.

Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned, and previous versions can be restored. Workflow definitions can now be saved in draft form and published live when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including Elasticsearch 6.0 and the inclusion of Tomcat 9.0. At the time of this release, JDK 8 is still the only supported JDK.

Known Issues

Documentation

Documentation for Liferay 7.1 is well underway. Many sections have already been completed in the Deployment and Development sections. For information on upgrading to 7.1, see the Upgrade Guide.

Jamie Sammons 2018-04-12T22:51:22Z
Categories: CMS, ECM

How to Measure Customer Experiences

Liferay - Thu, 04/12/2018 - 12:28

A well-constructed and effective customer experience is a crucial part of business strategies today. No matter the industry, customer experiences (CX) that meet target audience needs and help convert them into customers are a crucial part of continued commercial success. But because it is often difficult to define, CX can be hard for companies to gauge in order to determine its value and make changes that improve ROI.

However, measuring CX successfully can be done with the right insights and, when done correctly, can help businesses effectively shape and refine their strategies.

Requirements for Measuring Customer Experience

According to Gartner, there are three conditions that must be met by companies in order to successfully implement customer experience measurements.

  1. Measure CX Across Levels of Management - Companies should work to understand how customer experience impacts various levels of management, ranging from C-suite executives to operational leaders across the organization. These various measurements show how CX affects business outcomes, cross-departmental issues, department tactics and more. Should you only focus on one level, valuable insights may be missed.
  2. Include Metrics from All Departments - As opposed to the vertical nature of the first condition, this second condition is horizontal in nature and is meant to encompass the many different teams that make up a company. A metric such as customer satisfaction can vary between departments and reflect how each impacts the experience. Measuring such a metric across a business will provide comprehensive data and insights regarding where improvements should be made and how they may need to differ depending on the department.
  3. Balance the Rational with the Emotional - Businesses should not only measure the quality of the services provided, but the emotions they provoke within customers. Customers will have an emotional reaction to the treatment they receive from a company and these reactions will influence rational decisions. The more they love an experience, the more loyal they will become, and the more they dislike it, the more likely they are to leave for a competitor.

Having these conditions in place will help your team correctly approach metrics to better prevent accidentally skewing data and for an effective application of CX improvements across the company.

Measuring Customer Experience

Because customer experience involves all aspects of a consumer interacting with a business, there are many elements that may be measured by an organization. However, the following aspects can provide highly useful CX insights, regardless of the industry of a company.

  • Customer Satisfaction Scores - Companies should extensively poll customer satisfaction across all departments and keep detailed records to better understand weak points and areas for potential improvement, which can boost customer experience as a whole.
  • Product or Service Quality Metrics - Beyond being satisfied with their experiences, customers should be enabled to rate the products and services they receive. Whether this is through a third-party site or on the company’s own, receiving evaluations of products can show if CX issues are caused by what customers purchase, rather than how a company provides it.
  • Employee Engagement - A workforce that is committed to your company’s goals and values and is determined to perform their best means a more effective team. By measuring their investment in the company through anonymous surveys and performance evaluations, you can better determine how well a team is performing their duties. The result is an ability to equip and train your team as needed.
  • First Call Resolution Rates - An effective customer service center will have a higher likelihood of resolving customers’ problems during their first call or online chat. The higher the resolution rate, the more effective your service. Low rates should be a sign that your customer service is in need of improvement for a better customer experience.
  • Net Promoter Score - Beyond customer satisfaction, a net promoter score determines customer loyalty with one question - “How likely is it that you would recommend our company/product/service to a friend or colleague?” Based on a score of 0 to 10, a company can determine which customers will likely buy more and refer, as well as which will likely not return, according to Forbes. These insights help determine the longevity of a customer base and what should be done to improve CX.
Improving Your Customer Experience

Following a successful customer experience evaluation, companies will have the opportunity to make improvements to various aspects of the experience as needed. Once you have collected your CX metrics, consider the following recommendations on how you should take action.

  • Don’t focus on one major customer experience metric, but take multiple lower-level metrics, such as how various departments field a complaint about a product, into account for a more balanced review of CX.
  • Consider the effects of a change on all departments. While a specific department may have been in charge of tracking a metric, changes regarding how your company does business and interacts with customers must be considered to prevent negatively affecting some departments while improving others.
  • Take customer emotions into consideration. While rational, statistically-backed CX changes are crucial, remember how your audience will emotionally react to the changes you make, both positively and negatively.
  • Determine a hierarchy of metrics to guide CX plans and how you invest both time and money in your improvement efforts. Decide upon what statistics are most important for your company’s performance and what changes should be prioritized when planning both time and financial investment.

By following these guidelines, a company can approach their metrics and CX improvement efforts through a comprehensive strategy.

Transform Your Customer Experience

Whether you are looking to change specific aspects or completely overhaul your customer experience, find the strategic insights you need in our helpful whitepaper.

Read “Four Strategies to Transform Your Customer Experience”   Matthew Draper 2018-04-12T17:28:23Z
Categories: CMS, ECM

Liferay Faces downloads at an all time high

Liferay - Wed, 04/11/2018 - 10:15
I'm happy to report that, according to download stats from Maven Central, downloads of Liferay Faces are trending upward. In fact, our downloads have approximately doubled, with an all-time high surpassing 11,000 downloads/month! The download stats encompass all artifacts such as:
  • Liferay Faces Bridge API
  • Liferay Faces Bridge Impl
  • Liferay Faces Bridge Ext
  • Liferay Faces Alloy
  • Liferay Faces Portal
  • Liferay Faces Util
  • Demo portlets
  • Parent pom.xml descriptors
What's more, fine-grained download stats show that JSF 2.2 + Liferay 7.0 is the strongest adoption of JSF within Liferay since we started back in 2012. I would like to personally thank Vernon Singleton, Kyle Stiemann, Cody Hoag, Philip White, Juan Gonzalez, and everyone else who has helped make Liferay Faces such a successful project over the years. Also thanks to our faithful JSF community that keeps in close contact with us via the Liferay Faces forums. Well done all! Neil Griffin 2018-04-11T15:15:13Z
Categories: CMS, ECM

Four Ways Predictive Analytics in Retail Boosts Store Performance

Liferay - Tue, 04/10/2018 - 08:47

Studies show that, more than ever, customers want their favorite brands to anticipate their wants and needs. Purchasing an item is no longer considered an isolated event; it is part of an integrated, continuous experience that blurs the line between online and offline shopping. While shifting technology trends have had evident effects on retail, particularly on physical stores in the United States, where there have been massive closures, integrating modern digital technology with the pleasure of in-store shopping will provide countless benefits for companies in the sector.

The benefits of omnichannel retail include being able to gather previously untapped data about shoppers' behavior and their interactions with brands. The use of this data has enormous potential, but one of its most impactful applications in the retail world is predictive analytics around customers' purchasing behavior. By understanding how shoppers' past actions influence their future decisions, good analytics can anticipate and satisfy needs while also enabling successful marketing strategies, both online and offline.

Below are some of the benefits of predictive analytics in retail and how they can help shape the future of retail companies in ways previously considered impossible.

1. Targeted and Optimized Promotions

Targeted promotions are used by companies across every industry to add a layer of personalization to their communication and improve their customer relationships. However, when poorly executed, these promotions can have the opposite of the intended effect. A study by Access Development found that 57% of respondents consider receiving an ad for a product after rating it negatively to be one of the main reasons to cut ties with a brand.

Properly targeting your promotions means having a deep knowledge of each customer, which also gives you the information you need about the offers they receive. This includes being aware of past purchasing behavior and being able to anticipate future needs, such as products that complement previous purchases or replenishment offers based on behavior patterns, as in the case of printer ink cartridges. This can improve the customer's interactions with the brand both online and in the physical store, and strengthen the relationship between the two.

2. Predictive Search

Modern websites help customers find what they are looking for quickly through efficient search tools designed to surface the right results and reduce the time it takes to find an answer. Predictive analytics takes this a step further by using personalization to anticipate what customers will search for. This includes both autocompleting searches as soon as the user starts typing a query and showing, on landing pages, products and services that users may be coming to look for before they even begin searching. Amazon's analytics system is one of the best examples today: it keeps users coming back to the site and gets them interested in products they probably would not otherwise see.

A powerful predictive analytics system properly understands user behavior and makes predictions that are accurate, useful and encourage the customer to return to the store to complete their purchase without causing annoyance or interruption. However, it is very important that these predictions be as accurate as possible, since offering incorrect or unwanted results can complicate the search process or potentially offend customers, as happened with a recent pregnancy-related ad campaign from Target. When done correctly, website users will be less likely to explore their options with the competition and more likely to return to your site for future purchases.

3. Optimized Inventory Management

Using personalized shopping processes is not just about making customer interactions easier; it is also about making sure your stores are adequately stocked and prepared for demand. This lowers the likelihood of customer frustration caused by out-of-stock products and reduces costs, since you will not need to make extra product shipments. As explained in a Harvard Business Review article, forecasting demand is far more effective for cutting costs and determining inventory quantities than basing stock on aggregate sales totals, because analytics makes it possible to generate hyperlocal forecasts that, in turn, allow stock to be distributed geographically.

According to Accenture research, only a third of retailers currently offer their audience basic omnichannel capabilities, such as a satisfying in-store experience and inventory that is visible and accessible across multiple channels. Consider how cutting costs through predictive analytics could impact your company, and how you can meet your customers' needs by creating an omnichannel retail strategy that connects your online store to your physical one.

4. A Continuous Customer Relationship

Customers like to feel that brands know them individually before, during and after the purchase process. A Rosetta Consulting study indicates that customers who are engaged with a brand complete purchases 90% more often than those who are not highly engaged, and that these customers tend to spend 60% more per transaction. Through the use of predictive analytics, omnichannel retail helps companies show their customers that they know them and understand their needs. Purchases made in the physical store are reflected online and, in turn, online purchases can also be reflected in the physical store, so that store employees can identify each customer individually and respond quickly to their questions, creating a continuous, frictionless relationship between customer and brand.

These aspects of predictive analytics are all focused on improving customer experience and engagement. Predictive shopping will grow in importance for retailers in the coming years as their audiences come to expect offers related to their behavior, and as companies become able to deliver that kind of service by studying their customers' needs and interests. Used correctly, these capabilities can show that your brand cares about its audience and is at the forefront of shopping technology.

Equip Your Brand with Predictive Analytics

While every retail brand will have to determine the role predictive analytics plays in its strategy, its effects on how a brand evolves and on how precisely each store reacts to customer needs can be a great help at a time when industry trends change frequently. Building your front-end and back-end systems on a platform that can collect customer data and generate actionable insights will help you better understand your customer base and take effective action as soon as possible.

Embrace the new era of retail. Discover the latest retail strategies.

Embrace the New Era of Retail

Retail technology trends are changing, but that doesn't mean you should be left behind. Learn more about the effects of the modern digital era on the retail industry and give your team the tools they need to succeed.

Discover the latest retail strategies  Rebeca Pimentel 2018-04-10T13:47:00Z
Categories: CMS, ECM

BND Instruction To Avoid

Liferay - Mon, 04/09/2018 - 23:08
Introduction

Recently I was building a fragment bundle to expose a private package, per an earlier blog entry of mine. In the original bnd.bnd file, I found the following:

-dsannotations-options: inherit

Not having seen this before, I had to do some research...

Inheriting References

So I think I just gave it away.

When you add this instruction to your bnd.bnd file, the class hierarchy is searched and all @Reference annotations on parent classes will be processed as if they were defined in the component subclass itself.

Normally, if you have a Foo class with an @Reference and a child Bar class, the parent's references are not handled by OSGi. Instead, you need to add an @Reference annotation to the Bar class and have it call the superclass's setter method (it is also why you should always put your @Reference annotations on protected setters instead of private members, because a subclass may need to set the value).
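As a quick illustration of that manual pattern (my own sketch; Foo, Bar and SomeService are made-up names, and SomeService stands in for whatever service is being referenced):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical service interface, only here to make the sketch self-contained.
interface SomeService {
}

// Parent class: the reference is exposed through a protected setter so a subclass can reuse it.
abstract class Foo {

    protected SomeService someService;

    protected void setSomeService(SomeService someService) {
        this.someService = someService;
    }
}

// Without -dsannotations-options: inherit, SCR only reads Bar's own annotations,
// so the child component must re-declare the @Reference and delegate to the parent's setter.
@Component(immediate = true, service = Bar.class)
class Bar extends Foo {

    @Override
    @Reference(unbind = "-")
    protected void setSomeService(SomeService someService) {
        super.setSomeService(someService);
    }
}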

Once you add the dsannotations instruction to your bnd.bnd file, you no longer have to copy all of those @Reference annotations into the subclasses.

My first thought was that this was cool, this would save me from so much @Reference copying. Surely it would be an instruction I'd want to use like all of the time...

Avoid This Instruction

Further research led me to a discussion about supporting @Reference in inheritance found here: https://groups.google.com/forum/#!topic/bndtools-users/6oKC2e-24_E

It turns out that this can be a rather nasty implementation issue. Mainly, if you split Foo and Bar into different bundles, the contexts are different. So when processing Bar in a different bundle, it has its own context, class loader, etc., separate from the bundle that has the Foo parent class. I know that OSGi appears to be magic in how it is able to apparently cross these contexts without us as developers realizing how, but there's actually some complicated stuff going on under the hood, stuff that you and I really don't want to know too much about.

But for us to correctly and effectively use the dsannotations inheritance, we would have to know a lot more about how this context stuff worked.

Effectively, it's a can of worms, one that you really don't want to rip the lid off of.

So we need to avoid using this instruction, if for that reason alone.

A more complete response, though, comes from Felix Meschberger:

You might be pleased to hear that at the Apache Felix project we once had this feature in our annotations. From that we tried to standardize it actually.

The problem, though, is that we get a nasty coupling issue here between two implementation classes across bundle boundaries and we cannot express this dependency properly using Import-Package or Require-Capability headers.

Some problems springing to mind:

  • Generally you want to make bind/unbind methods private. Would it be ok for SCR to call the private bind method on the base class? (It can technically be done, but would it be ok?)

  • What if we have private methods but the implementor decides to change the name of the private methods — after all they are private and not part of the API surface. The consumer will fail as the bind/unbind methods are listed in the descriptors provided by/for the extension class and they still name the old method names.

  • If we don’t support private method names for that we would require these bind/unbind methods to be protected or public. And thus we force implementation-detail methods to become part of the API. Not very nice IMHO.

  • Note: package private methods don’t work as two different bundles should not share the same package with different contents.

We argued back then that it would be ok-ish to have such inheritance within a single bundle but came to the conclusion that this limitation, the explanations around it, etc. would not be worth the effort. So we dropped the feature again from the roadmap.

If I Shouldn't Use It, Why Is Liferay?

Hey, I had the same question!

It all comes down to the Liferay code base. Even though it is now OSGi-ified code, it still has a solid connection to the historical versions of the code. Blogs, for example, are now done via OSGi modules, but a large part of the code closely resembles code from the 6.x line.

The legacy Liferay code base heavily uses inheritance in addition to composition. Even for the newer Liferay implementation, there is still the heavy reliance on inheritance.

The optimal pattern for OSGi is one of composition and lighter inheritance; it's what makes OSGi Declarative Services so powerful: I can define a new component with a higher service ranking to replace an existing component, and I can wire together components to compose a dynamic solution.

Liferay's heavy use of inheritance, though, means there are a lot of parent classes that would require a heck of a lot of child class @Reference annotation copying in order to complete injection in the class hierarchy.

While there are plans to rework the code to transition to more composition and less inheritance, this will take some time to complete. Instead of forcing those changes right away, and to eliminate the @Reference annotation copying, they have used the -dsannotations-options instruction to force @Reference annotation processing up the class hierarchy. Generally this is not a problem because the inheritance is typically restricted to a single bundle, so the context change issues do not arise, although the remainder of the points Felix raised are still a concern.

Conclusion

So now you know as much as I do about the -dsannotations-options BND instruction, why you'll see it in Liferay bundles, but more importantly why you shouldn't be using it in your own projects.

And if you are mucking with Liferay bundles, if you see the -dsannotations-options instruction, you'll now know why it is there and why you need to keep it around.

David H Nebinger 2018-04-10T04:08:33Z
Categories: CMS, ECM

Using Private Module Binaries as Dependencies

Liferay - Mon, 04/09/2018 - 20:05

When you compare a CE release with an EE release, you'll find that there are a few additional modules that are only available in EE releases. In Liferay terms, these are called "private" modules. They are private in the sense that their source code doesn't exist in any public GitHub repositories (only private ones, and usually inside of a folder named "private"), and their binaries and corresponding source code aren't published to repository.liferay.com.

From a new Liferay developer perspective, the main roadblock you might encounter with them is when you want to consume API exposed by one of those private modules, or if you want to extend one of those modules. Essentially, you run into an obstacle immediately: there are no repositories for you to use, so your build-time dependencies are never satisfied.

A seasoned Java developer would quickly realize that you can use Maven to install any JARs you need, and both Maven and Gradle projects would then be able to use those installed JARs. However, not everyone is as savvy about this sort of thing, so I thought it would be a good idea to create a blog entry to walk through the process.

Script Creation

All the JARs are present inside of the osgi/marketplace folder, buried inside of .lpkg files. So, as a first step to get at the .jar files, you might create a temporary folder (which we'll call temp), and then extract each .lpkg into that temporary folder.

install_lpkg() {
    mkdir -p temp
    unzip -uqq "$1" -d temp
    rm -rf temp
}

With each JAR file, the next thing you'd want to do is install it to your local Maven repository, following Apache's guide to installing 3rd party JARs. This leads you to these commands, which assume you have some sense of the artifact ID and artifact version for the JAR.

install_jar() {
    mvn install:install-file -Dpackaging=jar -Dfile=$1 -DgroupId=com.liferay -DartifactId=${ARTIFACT_ID} -Dversion=${VERSION}
}

for jar in temp/*.jar; do
    install_jar "$jar"
done

So how would you uncover the artifact ID and artifact version for the JAR? As outlined in our Configuring Dependencies documentation found in the Developer Guide, all of the module JARs that are included with LPKGs use the bundle symbolic name as the artifact ID and the bundle version as the artifact version. Since both are stored in the JAR manifest, once you have a .jar file you can extract the intended artifact ID and version fairly easily.

ARTIFACT_ID=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-SymbolicName | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')
VERSION=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-Version | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')

You could iterate over every .jar and do this, but all of the public .jar files already exist in repository.liferay.com, with the appropriate source code (which most IDEs will auto-download). Because a JAR pulled from an .lpkg has no source attached, it's better to restrict your installation to only those modules that are private.

How do you differentiate between a private module and a public module? You could just compare CE and EE releases, but there's a slightly easier way. It turns out that when Liferay bundles an artifact, it adds a Liferay-Releng-Public header to indicate whether it was intended to be private or public. This means that, using the Liferay binary itself and without crawling Liferay's public Maven repository, you can figure out which artifacts are not available in public repositories and limit your installation to those.

if [ "" == "$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Liferay-Releng-Public | grep -F false)" ]; then return 0 fi Script Completion

Combining all of those elements leaves you with the following script. Simply run it from the osgi/marketplace folder, and it will extract your .lpkg files and install any non-public .jar files to your local Maven repository.

#!/bin/bash

install_jar() {
    if [ "" == "$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Liferay-Releng-Public | grep -F false)" ]; then
        return 0
    fi

    local ARTIFACT_ID=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-SymbolicName | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')
    local VERSION=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-Version | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')

    mvn install:install-file -Dpackaging=jar -Dfile=$1 -DgroupId=com.liferay -DartifactId=${ARTIFACT_ID} -Dversion=${VERSION}
}

install_lpkg() {
    mkdir -p temp
    unzip -uqq "$1" -d temp

    for jar in temp/*.jar; do
        install_jar "$jar"
    done

    rm -rf temp
}

shopt -s nullglob

for lpkg in *.lpkg; do
    install_lpkg "$lpkg"
done

If you need to publish to a remote repository, simply replace mvn install:install-file with mvn deploy:deploy-file, as outlined in Apache's guide to deploying 3rd party JARs to a remote repository, and provide the additional parameters: the repositoryId and the URL of the repository you wish to publish to.
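Once an artifact is installed locally, consuming it from a Gradle module is just a normal dependency declaration plus mavenLocal() as a repository. A minimal sketch, using a hypothetical private module name and version (use the actual Bundle-SymbolicName and Bundle-Version of the JAR you installed):

repositories {
    mavenLocal()
}

dependencies {
    compileOnly group: "com.liferay", name: "com.liferay.some.private.module.api", version: "1.0.0"
}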

Minhchau Dang 2018-04-10T01:05:23Z
Categories: CMS, ECM

Overriding Component Properties

Liferay - Sat, 04/07/2018 - 10:33
Introduction

So once you've been doing some Liferay OSGi development, you'll recognize your component properties stanza, most commonly applied to a typical portlet class:

@Component(
    immediate = true,
    property = {
        "com.liferay.portlet.add-default-resource=true",
        "com.liferay.portlet.display-category=category.hidden",
        "com.liferay.portlet.layout-cacheable=true",
        "com.liferay.portlet.private-request-attributes=false",
        "com.liferay.portlet.private-session-attributes=false",
        "com.liferay.portlet.render-weight=50",
        "com.liferay.portlet.use-default-template=true",
        "javax.portlet.display-name=my-controlpanel Portlet",
        "javax.portlet.expiration-cache=0",
        "javax.portlet.init-param.template-path=/",
        "javax.portlet.init-param.view-template=/view.jsp",
        "javax.portlet.name=" + MyControlPanelPortletKeys.MyControlPanel,
        "javax.portlet.resource-bundle=content.Language",
        "javax.portlet.security-role-ref=power-user,user",
        "javax.portlet.supports.mime-type=text/html"
    },
    service = Portlet.class
)
public class MyControlPanelPortlet extends MVCPortlet {
}

This is the typical thing you get when you use the Blade tools' "panel-app" template.

This is well and good, you're in development and you can edit these as you need to add, remove or change values.

But what can you do with the OOTB Liferay components, the ones that are compiled into classes packaged into a jar which is packaged into an LPKG file in the osgi/marketplace folder?

Overriding Component Properties

So actually this is quite easy to do. Before I show you how, though, I want to show what is actually going on...

So the "property" stanza or the lesser-used "properties" one (this one is used to identify a file to use for component properties), these are actually managed by the OSGi Configuration Admin service. Because it is managed by CA, we actually get a slew of functionality without even knowing about it.

The @Activate and @Modified annotations that allow you to pass in the properties map? CA is participating in that.

The @Reference annotation target filters referring to property values? CA is participating in that.

Just as CA is at the core of all of the configuration interfaces and of Liferay's ConfigurationProviderUtil for fetching a particular configuration instance, these component properties can be accessed in code in a similar way.
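For example, a component can receive its (possibly overridden) property map through its lifecycle methods; a minimal sketch, not tied to any particular Liferay class:

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

@Component(service = PropertyAwareComponent.class)
public class PropertyAwareComponent {

	@Activate
	@Modified
	protected void activate(Map<String, Object> properties) {
		// Configuration Admin hands the component its merged property map,
		// including any values overridden via osgi/configs files.
		_displayCategory = (String) properties.get("com.liferay.portlet.display-category");
	}

	private String _displayCategory;
}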

The other thing that CA brings us, and the thing we're going to take advantage of here, is that CA can use override files with custom property additions and updates (sorry, no deletes).

Let's say my sample class is actually com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet. To override the properties, I just have to create an osgi/configs/com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet.cfg file. Note the importance of a) the location where the file goes, b) the file name being the full package/class name, and c) the file having either the .cfg or .config extension and conforming to the appropriate CA format for that type.

The .cfg format is the simpler of the two; it follows a standard properties file format. So if I wanted to override the category to expose this portlet so it can be dropped on a page, I could put the following in my osgi/configs/com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet.cfg file:

com.liferay.portlet.display-category=category.sample

That's all there is to it. CA will use this override when collecting the component properties and, when Liferay is processing it, will treat this portlet as though it is in the Sample category and allow you to drop it on a page.

In a similar way you can add new properties, but the caveat is that the code must support them. For example, the MyControlPanelPortlet is not instanceable; I could put the following into my .cfg file:

com.liferay.portlet.instanceable=true

I'm adding a new property, one that is not in the original set of properties, but I know the code supports it and will make the portlet instanceable.

Conclusion

Using this same technique, you can override the properties for any OOTB Liferay component, including portlets, action classes, etc.

Just be sure to put the file into the osgi/configs folder, name the file correctly using full path/class, and use the .cfg or .config extensions with the correct format.

You can find out more about the .config format here: https://dev.liferay.com/discover/portal/-/knowledge_base/7-0/understanding-system-configuration-files
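For completeness, the same category override in the .config format would look something like this; a sketch based on the standard OSGi/Felix .config syntax, where string values are quoted (the file would be named ...MyControlPanelPortlet.config instead of .cfg):

com.liferay.portlet.display-category="category.sample"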

David H Nebinger 2018-04-07T15:33:43Z
Categories: CMS, ECM

Liferay and Docker: Upgrade Liferay to 7.0 GA6

Liferay - Sat, 04/07/2018 - 07:40

Liferay Portal 7.0 CE GA6 was announced about two weeks ago, and Liferay containerisers may want to upgrade their Docker containers to the new Liferay version. This is not a hard task to accomplish, but some steps may not be obvious the first time you face them. Hence this little guide on how to migrate from GA5 to GA6 inside a Docker container.

Local environment update

The first step is to migrate the local development environment to the new Liferay version. This phase is the same for both normal and containerised workspaces. In order to update the local environment, it's necessary to:

  • Update the liferay.workspace.bundle.url property inside gradle.properties file to
    liferay.workspace.bundle.url=https://cdn.lfrs.sl/releases.liferay.com/portal/7.0.5-ga6/liferay-ce-portal-tomcat-7.0-ga6-20180320170724974.zip
  • Run the bundle/initBundle gradle task
Docker container update

Now that the development workspace has been migrated, it's necessary to update the Liferay Docker container. The liferay.home path in the new container may differ from the path inside the GA5 container. For the sake of convenience, the GA6_LIFERAY_HOME variable will be used to refer to the liferay.home path in the new container, while GA5_LIFERAY_HOME refers to the liferay.home path inside the old container. For the Liferay containers in my GitHub repo, the two liferay.home paths are the following:

GA5_LIFERAY_HOME=/usr/local/liferay-ce-portal-7.0-ga5
GA6_LIFERAY_HOME=/usr/local/liferay-ce-portal-7.0-ga6

In order to update the Docker container, it's necessary to:

  • Change the Docker image inside docker-compose.yml file
    image: glassofwhiskey/liferay-portal:7.0-ce-ga6-dev
  • Update all portal container volumes inside docker-compose.yml so that they stop pointing to GA5_LIFERAY_HOME, but point to GA6_LIFERAY_HOME instead
    volumes:
      - liferay-document-library:GA6_LIFERAY_HOME/data/document_library
      - ${LIFERAY_BUNDLE_DIR}/osgi/configs:GA6_LIFERAY_HOME/osgi/configs
      - ${LIFERAY_BUNDLE_DIR}/portal-ext-properties:GA6_LIFERAY_HOME/portal-ext.properties
      ...
  • Update the liferay.home property inside portal-ext.properties file to point to GA6_LIFERAY_HOME and copy the updated file inside the bundles folder.
    liferay.home=GA6_LIFERAY_HOME
Database upgrade

The brand new Liferay container is almost ready now, but one last step is still missing. Indeed, launching startDockerEnv will result in an exception being thrown during the server startup phase: you need to upgrade your DB first!!!

This is the not-so-obvious part of a containerised upgrade. Normally, it would be enough to open a shell inside your bundles/tools/portal-tools-db-upgrade-client folder and type the following command:

java -jar com.liferay.portal.tools.db.upgrade.client.jar

But this will not work so well in a Dockerised architecture, with Liferay and the DB running inside containers. In such a case, it's necessary to run the aforementioned command from inside the Liferay container. However, in order to be able to do that, the docker-compose.yml file must be modified a bit:

  • First of all, the bundles/tools folder must be visible inside the container. The first step is therefore to add a new bind mount to the portal container:
    volumes:
      ...
      - ${LIFERAY_BUNDLE_DIR}/tools:GA6_LIFERAY_HOME/tools
  • Then it's necessary to be able to execute the upgrade client inside the container, so Liferay must not start automatically when the startDockerEnv task is invoked. What is needed instead is a Liferay container that hangs forever doing nothing, so that the upgrade client can execute its tasks undisturbed. To achieve this, the following line should be added to the docker-compose.yml file (a combined sketch of these changes follows this list):
    entrypoint: "tail -f /dev/null"
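Putting those pieces together, the portal service section of docker-compose.yml might look roughly like this during the upgrade; the service name, image, volume names and liferay.home path below are assumptions based on the examples above and should be adapted to your own setup:

portal:
  image: glassofwhiskey/liferay-portal:7.0-ce-ga6-dev
  entrypoint: "tail -f /dev/null"
  volumes:
    - liferay-document-library:/usr/local/liferay-ce-portal-7.0-ga6/data/document_library
    - ${LIFERAY_BUNDLE_DIR}/osgi/configs:/usr/local/liferay-ce-portal-7.0-ga6/osgi/configs
    - ${LIFERAY_BUNDLE_DIR}/portal-ext-properties:/usr/local/liferay-ce-portal-7.0-ga6/portal-ext.properties
    - ${LIFERAY_BUNDLE_DIR}/tools:/usr/local/liferay-ce-portal-7.0-ga6/tools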

Now it's time to execute the startDockerEnv task, wait for the containers to start and run the following command to execute the DB upgrade client from inside the Liferay container, where LIFERAY_CONTAINER_NAME is the name of the Liferay Docker container.

docker exec -it LIFERAY_CONTAINER_NAME java -jar GA6_LIFERAY_HOME/tools/portal-tools-db-upgrade-client/com.liferay.portal.tools.db.upgrade.client.jar

This command will start an interactive shell with the DB upgrade client. From this point on, all the information reported here is perfectly valid and the upgrade process can be completed as usual.

Under some conditions, the DB upgrade client may throw a file-permissions-related exception at the end of the configuration phase. In such a case, it's necessary to run the previous command as the root user, using this modified version of the command:

docker exec -it --user="root" LIFERAY_CONTAINER_NAME java -jar GA6_LIFERAY_HOME/tools/portal-tools-db-upgrade-client/com.liferay.portal.tools.db.upgrade.client.jar

Conclusions

And that's it. After the upgrade process completes, the Liferay DB will be ready to support the GA6 portal. All that remains is to run the stopDockerEnv task, remove the additional lines from the docker-compose.yml file and restart the whole thing. Et voilà! A fully upgraded GA6 containerised development environment is ready to be explored.

If you face any issues during the upgrade process, please don't be afraid to report them in the comments below: I (or someone from the Liferay containerisers community) will try to help you.

Happy Liferay upgrading!!!

Iacopo Colonnelli 2018-04-07T12:40:28Z
Categories: CMS, ECM

Compile Time vs Runtime OSGi Dependencies

Liferay - Thu, 04/05/2018 - 20:01

Just a quick blog post to talk about compile time vs runtime dependencies in the OSGi container, inspired by this thread: https://web.liferay.com/community/forums/-/message_boards/view_message/105911739#_19_message_106181351.

Basically a developer was able to get Apache POI pulled into a module, but they did so by replicating all of the "optional" directives into the bnd.bnd file and eventually putting it into the bundle's manifest.

So here's the thing - dependencies come in two forms. There are compile time dependencies and there are runtime dependencies.

Compile time dependencies are those direct dependencies that we developers are always familiar with. Oh, you want to create a servlet filter? Fine, you just have a compile time dependency as expressed in a build.gradle file like:

compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"

Basically a compile time dependency is a dependency needed by the compiler to compile your Java files into class files. That's really it; if you need a class from, say, the XStream XOM package, you declare your dependency on it in your build.gradle file and your code compiles just fine.

Runtime dependencies are not as straightforward. What you find, especially when you deploy your OSGi bundles, is that you not only have your direct dependencies like the servlet spec jar or the XStream jar, but also indirect dependencies to deal with.

Let's take a look at XStream. If we check mvnrepository.com, these are the listed dependencies for XStream:

Group                   Name                    Version
cglib                   cglib-nodep (optional)  2.2
dom4j                   dom4j (optional)        1.6.1
joda-time               joda-time (optional)    1.6
net.sf.kxml             kxml2-min (optional)    2.3.0
net.sf.kxml             kxml2 (optional)        2.3.0
org.codehaus.jettison   jettison (optional)     1.2
org.codehaus.woodstox   wstx-asl (optional)     3.2.7
org.jdom                jdom (optional)         1.1.3
org.jdom                jdom2 (optional)        2.0.5
stax                    stax (optional)         1.2.0
stax                    stax-api (optional)     1.0.1
xmlpull                 xmlpull                 1.1.3.1
xom                     xom (optional)          1.1
xpp3                    xpp3_min                1.1.4c

Note that there are only two dependencies here that are not listed as optional, xmlpull and xpp3_min. These are libraries that XStream uses for lower-level XML stuff.

But what are all of these optional dependencies?

Let's pick the well-known one, Joda Time.  Joda is a date/time library that supports parsing date/times from strings and formatting strings from date/times, amongst other things. The library is marked as "optional" because you don't have to have Joda in order to use XStream.

For example, if you are using XStream to do XOM marshaling on XML that does not have dates/times, well then the code that uses Joda will never be reached. So Joda is absolutely optional, from a library perspective, but as the implementing developer only you know if you need it or not. If you have XML that does have dates/times but you don't have Joda, you'll get ClassNotFoundException errors when you hit the XStream code that leverages Joda.

When the libraries are being built, the scope used for the declared dependencies, i.e. runtime vs compile in Gradle or the <optional /> tag in Maven, translates into the "resolution:=optional" directive in the jar's MANIFEST.MF. Depending upon how the jar is used, this extra designation can be honored or ignored. For example, if you use the "java" command with a classpath that includes the XStream jar and your classes, Java will happily run the code whether or not Joda is present. However, if you were to try to process an XML file with a date/time stamp, you may encounter a ClassNotFoundException or the like if Joda is not available.
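To make that concrete, an Import-Package header with optional entries looks along these lines in the MANIFEST.MF (an illustrative sketch, not a verbatim copy of XStream's actual manifest):

Import-Package: org.joda.time;resolution:=optional,org.joda.time.format;resolution:=optional,org.xmlpull.v1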

The OSGi container is stricter about these optional dependencies. OSGi sees that XStream may need Joda, but it cannot determine whether or not it will be needed when the bundle is resolving. This is one reason why you get the "Unresolved Requirement" error when OSGi attempts to start the bundle.

It is up to the bundle developer to know what is required and what isn't, and OSGi forces you to either satisfy the dependency (ala something like this) or exclude the package dependency by masking it out using the Import-Package declaration. If you, the developer, are using XStream, OSGi expects you to know whether you are going to need an optional dependency like Joda or not.

Now I hate picking out one example like this, but I think this is really important to point out. Yes, you can tell OSGi to also treat the dependency as optional. It will get you past the Unresolved Requirement bundle start error. The problem, however, is that it leaves you open to a later ClassNotFoundException because you have a dependency on a package which is marked as optional. The last thing you want is for your module, deployed to production, to fail on a sporadic basis because sometimes an XML file has a date/time to parse.

Recommendations

So, now for some recommendations...

If you have a dependency, you have to include it. I tend to use Option 4 from my blog, but I'm using the compileInclude Gradle dependency directive to handle the grunt work for me. If your dependency has required dependencies, well, you have to include them also, and compileInclude should cover that for you too.

For the optional dependencies, you have to determine if you need them or not. Yes, this is some analysis on your part, but it will be the only way to ensure that your module is correctly packaged.

If you need the optional dependency, you have to include it. I will typically use the compileInclude directive explicitly to ensure the dependency gets pulled into the module correctly.
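In build.gradle terms that looks something like the following; a sketch assuming a Liferay Workspace (which provides the compileInclude configuration) and illustrative artifact versions:

dependencies {
    compileInclude group: "com.thoughtworks.xstream", name: "xstream", version: "1.4.10"
    compileInclude group: "joda-time", name: "joda-time", version: "2.9.9"
}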

If you don't need the optional dependency, then exclude it entirely from the bundle. You do this in the bnd.bnd file using the Import-Package directive like:

Import-Package: !org.codehaus.jackson, \
    !org.codehaus.jackson.annotate, \
    !org.codehaus.jackson.format, \
    !org.codehaus.jackson.impl, \
    !org.codehaus.jackson.io, \
    !org.codehaus.jackson.type, \
    !org.codehaus.jackson.util, \
    !org.joda.time, \
    !org.joda.time.format, \
    *

The syntax above tells OSGi that the listed packages are not imported (because of the leading exclamation point). This is how you exclude a package from being imported.

NOTE: When using this syntax, always include that last line, the wildcard. This tells OSGi that any other non-listed packages should be imported and it will require all remaining packages be resolved before starting the bundle.

And the final recommendation is:

Do not pass the optional resolution directive on to OSGi as it may lead to runtime CNFEs due to unsatisfied dependencies.

Got questions about your dependency inclusion? Comment below or post to the forums!

David H Nebinger 2018-04-06T01:01:30Z
Categories: CMS, ECM

Digital Customer Experience

Liferay - Wed, 04/04/2018 - 17:28

Successful companies believe, more than ever, in the results that customer experience can bring. According to a study by Walker, it will become the main differentiator between brands by the year 2020, surpassing price and product. And part of this new differentiator will come through the digital customer experience.

The digital customer experience is as crucial as traditional, face-to-face interactions. Brands must consider these two facets as separate but strongly connected in order to create an effective strategy that pleases the target audience. However, understanding these two types of experience can be confusing for those who are not completely familiar with their differences and similarities.

Defining Customer Experience

The term customer experience (CX) is broad, referring to how a consumer feels about the way a company treats them. This includes the traditional aspects of customer service, such as in-person help from an employee, as well as newer, digitally based interactions, such as chatbot services. The term digital customer experience (DCX) focuses exclusively on these digital services, as well as on back-office tools that help make these online experiences possible, such as personalization software.

Think of customer experience and digital customer experience as equally important for meeting the needs of your audience and providing a cohesive, positive interaction with your brand. Although DCX can be encompassed within the broader definition of CX, the two are distinct from each other in their modus operandi, even though they share the same goal of closing sales.

Different Experiences Need Different Strategies

It is important for companies to understand that the same principles used to create a great customer experience do not always translate perfectly into a great digital customer experience. Although a company's target audience may find the same level of service and care both in person and online, companies should note that customer experience and digital customer experience do not have a direct correlation.

As discussed by the Harvard Business Review, much of the traditional customer experience depends on the actions of other customers, the physical environment and the location, which generally leads customers to lower their expectations for the experience. The digital customer experience, however, is purely online and subject to consumers' high standards, including page load times, the speed of finding the desired items and having their needs met exactly as they expect. Customers in a store know that employees are busy and that they may need to wait to be served, but online customers feel there is no excuse for slow or inefficient service when shopping from their computers. These differing expectations require distinct strategies to be met adequately.

Organizations will need valuable insights into how consumers interact with them through digital platforms in order to create excellent digital experiences. Gartner reports that the most common way to start improving DCX is by improving the collection and analysis of customer feedback. This way you can gain a better understanding of what to improve in order to meet customer expectations.

The Value of Digital Customer Experience

Given the amount of work required to create a great digital customer experience, companies may wonder whether it is worth the effort. However, when done effectively, it has been shown to have a positive effect on customers and to lead to a high return on investment. Although the impact of the digital customer experience on a company's profits varies, information from Forbes shows that broader adoption of a digital interaction (such as an online shopping cart) generates higher revenue and lower operating costs. McKinsey research also shows that companies with greater digital capability can convert sales at a rate 2.5 times higher than companies with less digital capability.

Just as with the traditional customer experience, a poor digital customer experience can drive customers to a competitor. As discussed in a Huffington Post article, 67% of customers cited bad experiences as a reason for leaving a company in favor of a competitor. This includes bad online experiences. It is important to note that even if you are not hearing direct complaints from customers, statistics show that only 1 in 26 unhappy customers will complain about a bad experience. Your target audience's silence about a weak or nonexistent digital customer experience does not mean you should not improve it.

The Future of Digital Customer Experience

It is important to note that online and offline experiences, while still different from each other, are becoming closer than ever in the customer journey. The modern customer generally expects to be able to switch between devices while online, as well as to start, continue or complete their journey in person whenever they wish. This means a company must provide digital experiences that can transfer important information between devices and also make that information accessible in physical locations for a smooth, interconnected experience.

As you manage and seek to improve your current digital customer experience, keep this in mind so that your brand can keep up with the evolving expectations of the modern audience.

 

Isabella Rocha 2018-04-04T22:28:52Z
Categories: CMS, ECM

Thinking Outside of the Box: Resources Importer

Liferay - Wed, 03/28/2018 - 09:48
Introduction

On a project recently I had a Theme war and, like those themes you can download from the MarketPlace, I also had pages, contents and documents imported by the Resources Importer (RI) as a site template.

Which is pretty cool, on its own, so I could deploy the theme and create a new site based on the theme and demo how it looks and works.

But I ran into something that I consider a bug: every time the container restarts, the WAR->WAB converter processes my theme but also my Resources Importer resources, and it goes crazy creating new versions of the contents and documents; my sites (if propagation was enabled) would start throwing exceptions about missing versions (I had developer mode enabled, so the old versions were getting deleted).

I have open bugs on all of these issues, but it made me wonder what I could do with the RI to work around these issues in the interim.

So I knew that I would still want to use RI to load my assets, but I only ever want RI to load them once, and not again if the containing bundle were already deployed.

Running once and only once, as part of an initial deployment or perhaps as part of an upgrade... well, I've seen that before: it's a perfect fit for an Upgrade Process implementation.

So I had an idea that I wanted to build an upgrade process that could invoke RI to import new resources. The Upgrade framework would ensure that a particular upgrade step would only run once (so I don't end up w/ weird version issues), that I could support doing version upgrades when necessary, and since I'm using RI I don't have to recreate the wheel to import resources.

What Can RI Do OOTB?

So the Resources Importer (RI) is an integrated part of Liferay 7.x CE and Liferay 7.x DXP. It is implemented in the com.liferay:com.liferay.exportimport.resources.importer bundle. RI ships with the following capabilities OOTB:

That second one was a doozy to find. It's not really documented, but the support for it is there in the code.

If you trace through the code from modules/apps/web-experience/export-import/export-import-resources-importer in the com.liferay.exportimport.resources.importer.internal.extender.ResourceImporterExtender class, you will see that it has code to track all com.liferay.exportimport.resources.importer.provider.ResourceImporterBundleProvider instances. When found, the RI infrastructure will look for a liferay-plugin-package.properties file in your bundle classes which defines where to find the resources to import. So if you register a ResourceImporterBundleProvider component in your bundle, RI will load your resources from that bundle.
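Such a registration might look roughly like the following; this is a sketch that assumes ResourceImporterBundleProvider can be implemented as a simple marker component (check the interface in the exportimport.resources.importer bundle for the exact contract):

import com.liferay.exportimport.resources.importer.provider.ResourceImporterBundleProvider;

import org.osgi.service.component.annotations.Component;

@Component(service = ResourceImporterBundleProvider.class)
public class MyResourceImporterBundleProvider implements ResourceImporterBundleProvider {

	// Marker component: its presence tells the Resources Importer extender to
	// process the resources declared by this bundle's
	// liferay-plugin-package.properties file.
}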

Now I don't know if it suffers from the same issue as the WAR->WAB reloading loop, so it might have issues on its own, but that would take some testing to find out.

The LMB message aspect can be found in the ResourceImporterExtender class. If you don't want to use the ResourceImporterBundleProvider aspect, you could use the code in this class to initialize the bundle servlet context and send RI a hot deploy message, and it will kick off the resource import (which is how the Extender class actually invokes RI, so if you can use the ResourceImporterBundleProvider component it will save you some boilerplate code).

The LAR file handling was interesting.  You can have a single LAR file as /WEB-INF/classes/resources-importer/archive.lar to load public resources.  If you have public and private or just private resources, you have to use /WEB-INF/classes/resources-importer/public.lar and/or /WEB-INF/classes/resources-importer/private.lar respectively.

Can I Do More with RI?

In short, yes. The first problem is that the Liferay APIs are not exported, so even though their bundle has the necessary classes, they are hidden away.

So in my workspace, https://github.com/dnebing/rsrc-upgrade-import, I have a module, resources-importer-api, which has copies of the classes but they are exported. Included in this module are some extension classes I created to support running RI within a bundle's UpgradeProcess.

The second module, resources-importer-upgrade-sample, is a sample bundle that shows how to build out an upgrade process that invokes the Resources Importer in upgrade processes.

The code that is checked in is configured only for bundle version 1.0.0. You can build and deploy to get the Dog articles.

Next, change the version to 1.1.0 in the bnd.bnd file and uncomment the line, https://github.com/dnebing/rsrc-upgrade-import/blob/master/modules/resources-importer-upgrade-sample/src/main/java/com/liferay/exportimport/resources/importer/sample/ResourceImporterUpgradeStepRegistrator.java#L73, build and deploy to get the Cat articles.

Next, change the version to 1.1.1 in the bnd.bnd file and uncomment the line, https://github.com/dnebing/rsrc-upgrade-import/blob/master/modules/resources-importer-upgrade-sample/src/main/java/com/liferay/exportimport/resources/importer/sample/ResourceImporterUpgradeStepRegistrator.java#L76, build and deploy to get the Elephant articles.

At each deployment, only the assets tied to the upgrade process will be processed. And if you start your version at 1.1.1 and uncomment both of the registry lines referenced above, build and deploy the first time to a clean environment, you'll see all 3 upgrade steps run in sequence, 0.0.0 -> 1.0.0, 1.0.0 -> 1.1.0, and 1.1.0 -> 1.1.1.

Since we're using an upgrade process to handle the asset deployment, the RI will run once and only once for each version.

Conclusion

With the provided API module, there are more ways to leverage the RI. I can imagine a message queue listener that receives specially crafted messages containing articles, transforms them into consumable RI objects, and invokes RI to do the heavy lifting, letting it call all of the necessary Liferay APIs to load the assets correctly.

Or a directory watcher that looks for files dropped in a particular folder and does pretty much the same thing.

For the record, I don't think I'd want to use this for managing the deployment of a long list of assets. I wouldn't want to use the RI as some sort of content promotion process as content creation is not a development activity and should be handled by appropriate publication tools built into the Liferay platform.

Anyways, check out the blog project repo at https://github.com/dnebing/rsrc-upgrade-import and let me know what you think...

David H Nebinger 2018-03-28T14:48:59Z
Categories: CMS, ECM

Liferay Portal 7.0 CE GA6 Release

Liferay - Tue, 03/27/2018 - 11:47

I'm pleased to announce the immediate availability of: Liferay Portal 7.0 CE GA6!


  Download Now! What’s New
  • Bug Fixes - Liferay 7 Portal CE GA6 is mainly a bug fix release and contains over 800 fixes. A complete list can be found here.

Known Issues
  • LPS-71774 - Browser button border overflow on Documents and Media
  • LPS-78897 - Announcement portlet's titles wrongly translated to Finnish and Swedish
  • LPS-78989 - Event repeat date incorrect when changing start date
  • Upgrade Process - May see message: Duplicate entry 'com.liferay.rss.web.internal.util.RSSFeed' for key 'IX_B27A301F'.  We have determined this error is non-disruptive and can safely be ignored.
Release Nomenclature

Following Liferay's version scheme established in 2010, this release is Liferay Portal 7.0 CE GA6.  The internal version number is 7.0.5 (i.e. the sixth release of 7.0).  See below for upgrade instructions from 6.1, 6.0, and 5.x.

Downloads

You can find the 7.0 release on the usual downloads page. 

Source Code

As Liferay is an open source project, many of you will want to get at its guts. The source is available as a zip archive on the downloads page, or on its home on GitHub. Many community contributions went into this release, and hopefully many more in future releases! If you're interested in contributing, take a look at our updated contribution guide.

Compatibility Matrix

Liferay Portal 7.0 CE GA6 has been tested extensively against different open source app server/database server combinations.

Application Servers:
  • Apache Tomcat 8.0 with Java 8
  • Wildfly 10.0 with Java 8
Database Servers:
  • HSQLDB 2 (only for demonstration, development, and testing)
  • MySQL 5.6
  • MariaDB 10
  • PostgreSQL 9.4
Search:
  • ElasticSearch 2.4.x
Documentation

The Liferay Documentation Team has been hard at work updating all of the documentation for the new release.  This includes updated (and vastly improved/enlarged) javadoc and related reference documentation; updated installation and development documentation can be found on the Liferay Developer Network. Our community has been instrumental in identifying the areas of improvement, and we are constantly updating the documentation to fill in any gaps.

Bug Reporting

If you believe you have encountered a bug in the new release you can report your issue on issues.liferay.com, selecting the "7.0.0 CE GA6" release as the value for the "Affects Version/s" field.

Upgrading

The upgrade experience for Liferay 7 has been completely revamped.  There are some caveats though, so be sure to check out the Upgrade Guide on the Liferay Developer Network for more details on upgrading to 7.0.

Getting Support

Support for Liferay Portal 7.0 CE GA6 is provided by our awesome community.  Please visit our  community website for more details on how you can receive support.

Liferay and its worldwide partner network also provides services, support, training, and consulting around its flagship enterprise offering, Liferay DXP.

Also note that customers on existing releases such as 6.1 and 6.2 continue to be professionally supported, and the documentation, source, and other ancillary data about these releases will remain in place.

Kudos

Thanks to everyone in our community! Your constant support is what makes each release as great as it is!

Jamie Sammons 2018-03-27T16:47:31Z
Categories: CMS, ECM

Announcing Unconference at Liferay Symposium North America 2018

Liferay - Mon, 03/26/2018 - 11:35

We are excited to announce that, for the first time, Unconference will be coming to Liferay Symposium North America 2018 and registration is now open.

This one-of-a-kind crowdsourced conference is unlike any other. There are no sponsored pavilions, no paid speakers and no planned sessions at Unconference. Instead, this event is completely shaped by its attendees and their needs.

Previously held at Liferay DevCon for the last several years, Unconference will give LSNA attendees the opportunity to shape an entire day around the ideas they are most interested in learning more about. In addition, by hosting their own sessions, attendees can establish themselves as sources of knowledge by sharing insights on the pressing issues most affecting them today.

Held on the first day of LSNA 2018, this special gathering sees the Liferay community come together to form their own conference and crowdsource the topics and presentations that they want. At the start of Unconference, a space is set up for attendees, allowing them to gather in a circle to brainstorm and build out the day’s agenda. Attendees then write down topics on cards and promote them to the crowd in order to draw interest. Finally, these session topics are coordinated into available time slots and organized to avoid topic repetition and allow attendees to join the sessions they believe are right for them.

Unconference attendees then break out into sessions throughout the day, returning as a group at the day’s close to discuss what they have learned and experienced. In the past, Unconference attendees have been thrilled to take part in an ever-changing, highly focused and deeply insightful day that addresses needs and interests in ways few other conferences can do.

While Unconference is geared toward developers, anyone can attend. In addition, because the topics and sessions are created and held by the attendees themselves, no two Unconferences are the same; each is shaped by the unique crowd at the event.

This year, Unconference will take place on October 8, the first day of Liferay Symposium North America 2018 in New Orleans, LA, and is hosted by Olaf Kock. Admission is $79 and Unconference runs from 9 a.m. to 5 p.m., allowing attendees to experience a full day of crowdsourced knowledge.

In the past, Unconference admission at DevCon has sold out quickly, so make sure to purchase your ticket as soon as possible to join in on the many insights that are sure to be shared this year.

Register for Unconference Today

Click the link below to get your ticket to Unconference and take part in this one-of-a-kind event.

Purchase Your Ticket   Matthew Draper 2018-03-26T16:35:29Z
Categories: CMS, ECM

Call for Proposals for Liferay Symposium North America 2018 is Now Open

Liferay - Mon, 03/26/2018 - 11:32

Liferay Symposium North America will be taking place from October 8-10 this year in New Orleans, LA, and proposals for presentations at the event are now being accepted.

If you or your organization has developed a project using the Liferay platform or discovered something you find fascinating within the world of Liferay software development, now is your chance to share your knowledge with others. We are looking for people who are passionate about the advances they are making in digital transformation, customer experiences, vertical-specific solutions, applications and more digital innovations with Liferay.

Through a presentation at LSNA, organizations have the chance to provide insights into Liferay development that may not have been known before, as well as help establish themselves as a source of knowledge in their fields of interest. Often, these presentations give an exclusive look into the development process and the strategies used to effectively reach complex goals to better serve customers, support employees, adapt to a changing industry and more.

Past Symposiums have seen companies, partners and other organizations present insightful case studies regarding how their teams leveraged Liferay software to create innovative solutions, providing details on the process behind programming new applications, how technology impacted their success as a company, how they determined the software and solution type that was right for their business needs and much more. In doing so, businesses can show themselves as thought leaders and innovators in their respective fields.

“Take the next step in digital innovation” is the theme of LSNA 2018. Presenters will tie their talks in with this central idea and the three supporting subpoints:

  • Be a Change Agent - IT leaders need to play active roles in identifying the right technologies, strategies and methodologies to digitally mature their businesses.
  • Turn Ideas Into Action - Success means moving forward one project at a time, using new insights and tools to translate your vision into actionable steps that deliver on key goals specific to your company, industry and audience.
  • Do It Together - Business and IT leaders can together stimulate rather than hinder digital innovation by learning to speak the same language and improving cross-functional collaboration.

LSNA presentations offer the opportunity for both developers and executives to provide insights regarding the latest in modern digital business development to a like-minded crowd. Together, the wide variety of presenters at Symposium represent leaders at the forefront of digital innovation in industries around the world.

If you are excited to share what your organization is doing with Liferay, there is no better opportunity than Liferay Symposium North America.

The deadline to submit your proposal is Friday, May 11, and those selected to speak at LSNA 2018 will be notified by June 11. Every presenter whose talk is accepted by the program committee will receive a free ticket to LSNA 2018.

Submit Your Proposal Today

Click the link below to submit your Liferay Symposium North America 2018 proposal.

Submit a Proposal   Matthew Draper 2018-03-26T16:32:00Z
Categories: CMS, ECM

ADFS Liferay DXP Integration

Liferay - Sun, 03/25/2018 - 09:50

Introduction

This blog covers Liferay DXP SP4 integration with Microsoft ADFS (2.0) through SAML 2.0 (Liferay SAML plugin 3.1.1). Please note that, as per the latest update to the Liferay SAML plugin, you don't need to restart the server after making changes on the Liferay side. Also, in this blog Liferay is registered as the Service Provider and ADFS as the Identity Provider.

This article was inspired by, and draws on, the following references.

  1. https://web.liferay.com/web/a.s/blog/-/blogs/adfs-dxp
  2. https://web.liferay.com/web/sandeep.sapra/blog/-/blogs/sso-in-liferay-dxp-using-saml
  3. Liferay SAML customer documentation (only available with licensed customers)
  4. https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Professional-and-Enterprise-

Integration steps

  • The Liferay application must be registered in ADFS as a relying party trust manually ONLY; importing the metadata by URL does not work. Below are the errors you get when you don't enter the details manually.

 

Figure-1: ADFS import metadata URL error

Figure2: Error while registering Liferay metadata in ADFS through URL

  • During manual registration you have to enter Liferay's SP EntityID and certificate properly. Once you have registered Liferay's SP SAML metadata, carefully confirm the following points.

Figure3: Identifiers - This should be Liferay saml metadata's "EntityID".

Figure4: Liferay by default works with SHA encryption.

Figure5: Endpoints. Remember ONLY 1 assertion and 1 logout endpoint is allowed by Liferay.

Figure6: ADFS's SAML endpoint assertion details.

Figure7: SAML logout endpoints.

  • Add the following claim rules to the registered relying party trust. The first is an LDAP claim rule and the second is a NameID transformation.

Figure8: LDAP attribute mapping claim rule at ADFS.

Figure9: NameID transformation claim rule.

Figure10: All claim rules at ADFS. Remember the sequence of claim rules, SAML doesn't like change in this sequence. NameID rule should always be last.

  • Now execute the following two commands from the ADFS server's PowerShell.
Set-AdfsRelyingPartyTrust -TargetName "www.my-site.com" -SamlResponseSignature MessageAndAssertion

Command 1: This forces ADFS to sign all SAML responses for Liferay's relying party trust.

set-ADFSRelyingPartyTrust –TargetName "TESTX" –EncryptClaims $False

Command 2: This keeps the assertions in the ADFS SAML response unencrypted so that they can be read by Liferay.

  • Register ADFS as an Identity Provider in Liferay's SAML Admin section.

Figure11: NameID and attribute mapping at Liferay end for ADFS. Take note of Liferay attributes on right-side of equals operator.

  • Re-verify Liferay's service provider setting

Figure12: Liferay Service Provider settings

  • The last configuration step is importing the ADFS certificate into Liferay's SAML keystore. By default the SAML keystore is generated in the /data folder of Liferay Home. Execute the command below to import the ADFS certificate.

keytool -importcert -alias ssoselfsigned -file sso-certificate.cer -keystore keystore.jks

  • Please remember that the keystore password when importing is "liferay"
  • Since ADFS is the IdP and Liferay is the SP, the ADFS (IdP-initiated) SSO sign-in and sign-out URLs should be used in this scenario.

Sign-in URL: https://fs.testsso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState={logged-in-page-liferay}

Sign-out URL: https://fs.testsso.com/adfs/ls/?wa=wsignout1.0

Neeraj Gautam 2018-03-25T14:50:57Z
Categories: CMS, ECM