
· 5 min read

If you build a default JHipster monolithic application with Angular and OAuth2 (via Keycloak) as your authentication type, everything apparently works well. However, if you want to separate the back end from the front end (for that, you need to set the SERVER_API_URL variable defined in webpack.common.js, as mentioned here), you will get several errors in your web browser's console, such as:

Access to XMLHttpRequest at 'http://localhost:8080/oauth2/authorization/oidc' (redirected from 'http://localhost:8080/api/account') from origin 'http://localhost:9000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
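
The usual fix is to allow the front-end origin in the back end's CORS configuration (in JHipster this typically lives under the jhipster.cors key in application-dev.yml). As a rough Java sketch of the same idea (the bean and the http://localhost:9000 origin are assumptions, not the post's exact fix), a Spring CorsFilter can be registered like this:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.filter.CorsFilter;

@Configuration
public class CorsConfig {

    @Bean
    public CorsFilter corsFilter() {
        CorsConfiguration config = new CorsConfiguration();
        // Assumed dev front-end origin; adjust to your setup
        config.addAllowedOrigin("http://localhost:9000");
        config.addAllowedHeader("*");
        config.addAllowedMethod("*");
        config.setAllowCredentials(true);

        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/api/**", config);
        // The OAuth2 endpoints also need CORS in this scenario
        source.registerCorsConfiguration("/oauth2/**", config);
        return new CorsFilter(source);
    }
}
```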

· 5 min read

As you can see on its web page, Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes runs anywhere Linux does: your laptop, globally distributed data centers, major cloud providers, and so on. Starting from 5 June 2018, AWS EKS is generally available. K8s also runs on Google Cloud Platform (GCP) through Google Kubernetes Engine (GKE).

If you want K8s on your workstation, the solution is Minikube. Minikube allows you to run actual K8s code locally on your machine, avoiding the complexity, expense, and slower response of a remote cluster. It is well supported for development and testing, but it is not a production technology and cannot be relied upon for production workloads.
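
For illustration, a minimal local session might look like this (assuming minikube and kubectl are already installed):

```bash
minikube start          # boot a local single-node K8s cluster
kubectl get nodes       # verify the node is Ready
minikube dashboard      # open the K8s dashboard in a browser
minikube stop           # shut the cluster down when done
```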

· 5 min read

Enterprises produce a huge amount of data from a variety of sources, such as sensors. Sensors are used to measure different physical characteristics of a machine, zone, etc., e.g., pressure, pH, or temperature. These sensors provide sensor data tags as time series data. In this post, we will save data in InfluxDB, an open-source time series database (TSDB), using an Apache NiFi dataflow.
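
To make the target concrete, here is a rough sketch of writing one sensor reading with the influxdb-java client (the connection details, database, measurement, and tag names are assumptions); in the post itself, a NiFi dataflow performs the equivalent write:

```java
import java.util.concurrent.TimeUnit;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class SensorWriteExample {
    public static void main(String[] args) {
        // Assumed local InfluxDB instance and credentials
        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "admin", "admin");
        Point point = Point.measurement("temperature")        // hypothetical measurement
                .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
                .tag("sensor", "sensor-01")                   // hypothetical tag
                .addField("value", 23.5)
                .build();
        influxDB.write("sensors", "autogen", point);          // database, retention policy
        influxDB.close();
    }
}
```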

· 11 min read

Testing Liferay plugins is not an easy task. Nowadays, integration tests in Liferay are easier thanks to Arquillian, an innovative and highly extensible testing platform for the JVM that enables developers to easily create automated integration, functional, and acceptance tests for Java middleware. In this post we describe the steps you need to follow to get Arquillian working, along with some of the errors you may encounter.
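
For a taste of the end result, here is a minimal sketch of an Arquillian JUnit test (the class under test and the assertion are illustrative, not taken from the post):

```java
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class SampleIT {

    // Trivial class under test, packaged into the deployment below
    public static class Greeter {
        public String greet(String name) { return "Hello, " + name + "!"; }
    }

    @Deployment
    public static JavaArchive createDeployment() {
        // Package only what the test needs into a micro-deployment
        return ShrinkWrap.create(JavaArchive.class)
                .addClasses(SampleIT.class, Greeter.class);
    }

    @Test
    public void shouldGreet() {
        Assert.assertEquals("Hello, Liferay!", new Greeter().greet("Liferay"));
    }
}
```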

· 8 min read

If you need to extract a sizeable amount of data from a Microsoft Content Management Server 2002 (MCMS) database you have two options:

  1. Using the CMS 2002 API (PAPI)
  2. Interacting directly with the CMS 2002 database

Although the second option is strongly discouraged, because the database schema is not published and the CMS server executes complex procedures against it, we are going to use it; however, we will only run simple read-only queries against the CMS database. Under no circumstances should write operations be performed directly against the database; the PAPI is the appropriate interface for writing to the CMS database.
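
As a sketch of what "simple read-only queries" can look like (the connection string is an assumption, and the table and column names are hypothetical, since the real schema is unpublished), a JDBC query might be:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CmsReadOnlyQuery {
    public static void main(String[] args) throws Exception {
        // Assumed connection string; point it at the MCMS content database
        String url = "jdbc:sqlserver://localhost;databaseName=MCMS;user=reader;password=secret";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setReadOnly(true); // hint that we only ever read, never write
            try (Statement st = conn.createStatement();
                 // Hypothetical table/column names, for illustration only
                 ResultSet rs = st.executeQuery("SELECT TOP 10 Name FROM NodeTable")) {
                while (rs.next()) {
                    System.out.println(rs.getString("Name"));
                }
            }
        }
    }
}
```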

· 3 min read

Android Studio can work directly with GitHub, but not with Bitbucket. Instructions for pushing a project to a Bitbucket repository for the first time are very confusing, and the process can become frustrating if the right steps are not followed. The key point is that your first commit must be executed manually rather than through an option in the main menu or popup menus.

In this post, we are going to use Android Studio version 1.4.
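
For context, the manual first commit and push that the post walks through boils down to something like the following on the command line (the repository URL is a placeholder):

```bash
cd /path/to/your/AndroidStudioProject
git init
git add .
git commit -m "Initial commit"
git remote add origin https://bitbucket.org/youruser/yourrepo.git
git push -u origin master
```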

· 7 min read

Apache Spark is a fast and general engine for large-scale data processing. It is written in Scala, a functional programming language that runs in a JVM. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. You can use Spark through the Spark Shell for learning or data exploration (in Scala or Python, and since 1.4, in R) or through Spark applications for large-scale data processing (mainly in Python, Scala, or Java).
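
To give a flavor of the Java API, here is a minimal word count sketch (assuming Spark 2.x; input and output paths are placeholders):

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("input.txt");           // placeholder input
            lines.flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                 .mapToPair(word -> new Tuple2<>(word, 1))
                 .reduceByKey(Integer::sum)
                 .saveAsTextFile("word-counts");                        // placeholder output
        }
    }
}
```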

· 6 min read

Apache Hadoop is an open-source software framework for the storage and large-scale processing of datasets on clusters of commodity hardware. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

Hadoop components

Hadoop is divided into two core components:

  • HDFS: a distributed file system;
  • YARN: a cluster resource management technology.
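
To make the HDFS side concrete, here is a small sketch using the HDFS Java API to write a file and read it back (the NameNode URI and path are assumptions):

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000"); // assumed NameNode address
        Path path = new Path("/tmp/hello.txt");

        try (FileSystem fs = FileSystem.get(conf)) {
            // Write a small file, then read it back
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("Hello, HDFS!".getBytes(StandardCharsets.UTF_8));
            }
            try (FSDataInputStream in = fs.open(path)) {
                byte[] buf = new byte[64];
                int n = in.read(buf);
                System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        }
    }
}
```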