Adding Apache/Httpd environment variables for PHP/Web on RHEL/CentOS

This is related to PDO Informix.

I need to run the CLI and web pages at the same time. PDO Informix looks for the Informix SDK path (strangely); I think that is because it uses the SDK libraries instead of bundling all the functions in PDO itself.

Setting up the Apache path information can be kind of tricky. This solution may work only for me, though.

#1. Add path information for all users. This is for the PHP CLI; it is not visible from the web server.

$ cd /etc/profile.d
$ vi informix.sh
And add the following lines:
export INFORMIXDIR="/opt/IBM/informix/4.10"
export PATH=$PATH:$INFORMIXDIR

#2. For Apache environment variables, RHEL/CentOS provides /etc/sysconfig/httpd.
$ vi /etc/sysconfig/httpd
And add the following line:
INFORMIXDIR="/opt/IBM/informix/4.10"

Restart Apache server.

When I looked for information, many people indicated that I needed to export this value. I tried that, but it didn't work for me.
Many resources also point to SetEnv in httpd.conf. The variable then shows up in phpinfo(), but it doesn't actually work either.
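
To verify which environments actually see the variable, a quick check with PHP's getenv() helps (the script name here is just an example):

<?php
// envcheck.php: run once from the CLI ($ php envcheck.php) and once
// through the web server. Both should print the SDK path afterwards.
var_dump(getenv('INFORMIXDIR'));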

Installing PHP PDO Informix on RHEL/CentOS

When I installed PHP, I used the following command:
$ yum install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo

On a CentOS, Red Hat Enterprise Linux, or Fedora machine and their derivatives, this is as simple as running this command, either using sudo or as the root user:
$ yum install php-pear
The pecl command can then be found at /usr/bin/pecl. On other Linux distributions it should be a similar process.

That doesn't mean you're ready to compile yet, though. You also need the PHP development package:
$ yum install php-devel

1) download the latest version of PDO_INFORMIX
$ mkdir pdo
$ cd pdo
$ wget http://pecl.php.net/get/PDO_INFORMIX-1.3.3.tgz

2) uncompress
$ tar zxf PDO_INFORMIX-1.3.3.tgz
$ cd PDO_INFORMIX-1.3.3/

3) set your INFORMIXDIR
$ export INFORMIXDIR=/opt/IBM/informix/4.10

4) execute phpize
$ phpize
Configuring for:
PHP Api Version: 20090626
Zend Module Api No: 20090626
Zend Extension Api No: 220090626
configure.in:3: warning: prefer named diversions
configure.in:3: warning: prefer named diversions

5) execute ./configure
$ ./configure
configure: loading site script /usr/share/site/x86_64-unknown-linux-gnu
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for a sed that does not truncate output... /usr/bin/sed
checking for gcc... gcc

6) build and install the extension
$ make
$ make install

7) add a module file under /etc/php.d
$ cd /etc/php.d
$ vi 30-pdo_informix.ini
(The numeric prefix is optional.)
And add the line:
extension=pdo_informix.so

Restart Apache server.
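
To confirm the extension loads and can actually reach a server, a minimal test script helps. The DSN format follows the PDO_INFORMIX documentation; the host, service port, database, server name, and credentials below are placeholders for your own instance:

<?php
// Fail fast if the extension did not load.
if (!extension_loaded('pdo_informix')) {
    die("pdo_informix is not loaded\n");
}

// Placeholder connection values; replace them with your own instance.
$dsn = 'informix:host=db.example.com; service=9088; database=testdb; '
     . 'server=ol_informix; protocol=onsoctcp; EnableScrollableCursors=1';

try {
    $db = new PDO($dsn, 'informix_user', 'secret');
    echo "Connected\n";
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage() . "\n";
}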

Installing Informix Client SDK on RHEL/CentOS

As an Informix user, you have to jump through a lot of hoops, since the product has become a dinosaur due to the lack of support from IBM.

I prefer PHP to Perl for writing shell programs, and connecting to Informix was full of surprises. I still don't understand why IBM keeps insisting that a simple Informix DB library has to look for the SDK. It is just heart-wrenching, but it is what it is, I guess.

First off, installing the Informix Client SDK itself turned out to be a pretty daunting task. Nothing works out of the box, and I wasn't prepared enough.

Download the Informix Client SDK from the IBM website (the site is not easy to navigate to find what you need, which is another challenge).

Here are the issues I faced:
Exception “JRE libraries are missing or not compatible” occurs while installing Content Platform Engine 5.2.1 on Linux

Preparing to install…
Extracting the JRE from the installer archive…
Unpacking the JRE…
Extracting the installation resources from the installer archive…
Configuring the installer for this system’s environment…
Launching installer…
JRE libraries are missing or not compatible….
Exiting….

It is caused by the ld-linux.so.2 library missing on Linux.
To resolve the problem, install it with the following command:
$ yum install ld-linux.so.2

And another cause is insufficient permissions in the /tmp directory. In environments where obtaining the required permissions may not be straightforward due to how the server is locked down, security policies, etc., there is a simple workaround. You need to create a new “temp” directory in a location where you do have the proper permissions.

Example:
$ mkdir /opt/informix/tmp
$ export IATEMPDIR=/opt/informix/tmp

(You have to re-export this variable if you terminate the session.)

Thanks to this link: https://www.coreblox.com/blog/2018/2/ca-access-gateway-install-error-jre-libraries-are-missing-or-not-compatible

I faced an additional issue with another server:

“One or more prerequisite system libraries are not installed on your computer.
Install libncurses.so.5 and then restart the IBM Informix installation
program.”

In this case, you need to install the ncurses libraries:
$ yum install libncurses*

JavaScript to Shuffle a Deck of Cards

It was just another day. I came across this brain teaser and decided to do it in JS: namely, a JavaScript function to shuffle a deck of cards. I represent the cards as numbers in an array.

const leaf = 52; // number of cards in the deck
const deck = [];

// Draw a random empty slot for each card 1..52. Note this is rejection
// sampling (retry on collision), so it gets slower as the deck fills up.
for (let i = 1; i <= leaf; i++) {
  let j = Math.floor(Math.random() * leaf);
  while (deck[j]) {
    j = Math.floor(Math.random() * leaf);
  }
  deck[j] = i;
}
console.log(deck);

PHP Array Reverse by array_walk() (Without Loops)

I've procrastinated on writing the following function for over 11 years. It is simple, but I have a back story about it, and thanks to that I have been hanging onto it all this time. This takes a load off my mind. Better late than never, even 11 years late.

$arr_str = ['a', 'b', 'c', 'd'];
$arr_rev = [];
$arr_len = count($arr_str);

// Walk the array and write each item at the mirrored index of $arr_rev.
// $arr_len is captured by reference so the decrement persists across calls.
array_walk($arr_str, function($item, $key) use (&$arr_rev, &$arr_len) {
  $arr_rev[--$arr_len] = $item;
});

// The keys were assigned in descending order, so sort them back to 0..n-1.
ksort($arr_rev); // ['d', 'c', 'b', 'a']
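
As a quick sanity check, the result should now match PHP's built-in array_reverse():

var_dump($arr_rev === array_reverse($arr_str)); // bool(true)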

Drupal 8 search with Elasticsearch

  • Install module
  • Create an index on Elasticsearch engine
  • Create a view
  • Attach facet filters

The Search API module needs the Elasticsearch PHP library, which provides the abstraction layer for the Elasticsearch Connector module in Drupal. It can be installed through Composer.

$ composer require nodespark/des-connector:5.x-dev

$ composer update
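
Before wiring up the connector, it may be worth confirming that PHP can reach the cluster at all. Here is a minimal sketch using the elasticsearch/elasticsearch client that the connector pulls in, assuming a default local node on port 9200:

<?php
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

// Build a client against the default local node (adjust the host as needed).
$client = ClientBuilder::create()
    ->setHosts(['localhost:9200'])
    ->build();

// Prints the cluster name and version if the node is reachable.
print_r($client->info());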

Add Elasticsearch

Go to Configuration > Search and metadata > Elasticsearch Connector.

Click "Add Cluster" and configure the server.

By default, the cluster name is "elasticsearch." If you want to edit the cluster/node information, edit the elasticsearch.yml file.

Go to Configuration > Search and metadata > Search API.

Click "Add Index."

When you select the "Content" data source, options are presented to choose which bundles should be indexed.

Before a search can be performed, select all the fields that should be available to search. This is configured in the "Fields" tab.

The last step of the index setup is to add "processors." These include items such as:

  • Content access
  • Ignore case (case-insensitive search)
  • Tokenizer (split into individual words)

Once fields and processors are set up, go back to the "View" tab. It will show the status of the index, and at this point the content is ready to be indexed, if it was not already set to index immediately when the index was created. Indexing of content is done via cron, and any new content will get indexed then.

Next, create a view:

  1. Go to Structure > Add view
  2. Provide a view name and select your index name as the view source
  3. Under Format > Show, select "Rendered Entity" (or select "Fields" and add each field you would like to display in the Fields section)
  4. Under Filter Criteria, add the "Fulltext search" field and expose it for filtering
  5. Add Sort Criteria: the best one to use is "Relevance (desc)"

With the search page set up, we now want to add facets to let users filter down content. Navigate to Configuration > Search and metadata > Facets, then click "Add facet."

The last step is to place the newly created facet blocks on the Block Layout page.

  • The Elastic Stack (Elasticsearch, Logstash, and Kibana) can interactively search, discover, and analyze data to gain insights, which improves the analysis of time-series data.
  • No need for an upfront schema definition; a schema can be defined per type to customize the indexing process.
  • Has an edge in cloud environments, although this depends on how SolrCloud advances.
  • Has advantages for search at the enterprise or higher-ed level, where analytics plays a bigger role.

Error in Apache Nutch Indexing for Elasticsearch

Even after running the following command, there are no indexes in Elasticsearch.

$ bin/nutch index elasticsearch -all

The logs/hadoop.log displays the following. It looks as if there were no issues in completing the indexing job.

2017-08-18 11:29:59,542 INFO  elasticsearch.plugins - [Behemoth] loaded [], sites []
2017-08-18 11:29:59,564 INFO  client.transport - [Behemoth] failed to get node info for [#transport#-1][BAS2019][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] disconnected
2017-08-18 11:29:59,565 INFO  indexer.IndexingJob - IndexingJob: done.
2017-08-18 11:32:23,894 INFO  indexer.IndexingJob - IndexingJob: starting
2017-08-18 11:32:24,048 WARN  util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-08-18 11:32:24,123 INFO  basic.BasicIndexingFilter - Maximum title length for indexing set to: 100
2017-08-18 11:32:24,123 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.basic.BasicIndexingFilter
2017-08-18 11:32:24,125 INFO  anchor.AnchorIndexingFilter - Anchor deduplication is: off
2017-08-18 11:32:24,125 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.anchor.AnchorIndexingFilter
2017-08-18 11:32:24,129 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.metadata.MetadataIndexer
2017-08-18 11:32:24,319 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.more.MoreIndexingFilter
2017-08-18 11:32:25,099 WARN  conf.Configuration - file:/tmp/hadoop-xxx/mapred/staging/xxx486819994/.staging/job_local486819994_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2017-08-18 11:32:25,101 WARN  conf.Configuration - file:/tmp/hadoop-xxx/mapred/staging/xxx486819994/.staging/job_local486819994_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2017-08-18 11:32:25,161 WARN  conf.Configuration - file:/tmp/hadoop-xxx/mapred/local/localRunner/xxx/job_local486819994_0001/job_local486819994_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2017-08-18 11:32:25,162 WARN  conf.Configuration - file:/tmp/hadoop-xxx/mapred/local/localRunner/xxx/job_local486819994_0001/job_local486819994_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2017-08-18 11:32:25,236 INFO  indexer.IndexWriters - Adding org.apache.nutch.indexwriter.elastic.ElasticIndexWriter
2017-08-18 11:32:25,348 INFO  elasticsearch.plugins - [Lin Sun] loaded [], sites []
2017-08-18 11:32:25,956 INFO  client.transport - [Lin Sun] failed to get node info for [#transport#-1][BAS2019][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] disconnected
2017-08-18 11:32:25,962 INFO  basic.BasicIndexingFilter - Maximum title length for indexing set to: 100
2017-08-18 11:32:25,962 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.basic.BasicIndexingFilter
2017-08-18 11:32:25,962 INFO  anchor.AnchorIndexingFilter - Anchor deduplication is: off
2017-08-18 11:32:25,962 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.anchor.AnchorIndexingFilter
2017-08-18 11:32:25,962 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.metadata.MetadataIndexer
2017-08-18 11:32:25,963 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.more.MoreIndexingFilter
2017-08-18 11:32:25,992 INFO  elastic.ElasticIndexWriter - Processing remaining requests [docs = 0, length = 0, total docs = 0]
2017-08-18 11:32:25,992 INFO  elastic.ElasticIndexWriter - Processing to finalize last execute
2017-08-18 11:32:26,190 INFO  indexer.IndexWriters - Adding org.apache.nutch.indexwriter.elastic.ElasticIndexWriter
2017-08-18 11:32:26,190 INFO  indexer.IndexingJob - Active IndexWriters :
ElasticIndexWriter
    elastic.cluster : elastic prefix cluster
    elastic.host : hostname
    elastic.port : port (default 9300)
    elastic.index : elastic index command
    elastic.max.bulk.docs : elastic bulk index doc counts. (default 250)
    elastic.max.bulk.size : elastic bulk index length. (default 2500500 ~2.5MB)
2017-08-18 11:32:26,201 INFO  elasticsearch.plugins - [Cloud 9] loaded [], sites []
2017-08-18 11:32:26,221 INFO  client.transport - [Cloud 9] failed to get node info for [#transport#-1][BAS2019][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] disconnected
2017-08-18 11:32:26,222 INFO  indexer.IndexingJob - IndexingJob: done.

Then I looked at Elasticsearch through Kibana, but I couldn't find any indexes posted from Nutch. This was a little concerning since I didn't know where it broke. I finally found the reason in the Elasticsearch log (under /usr/local/var/log/elasticsearch).

It says,

java.lang.IllegalStateException: Received message from unsupported version: [1.0.0] minimal compatible version is: [5.0.0]
    at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1379) ~[elasticsearch-5.5.1.jar:5.5.1]

It was obviously a version compatibility issue. The issue has been raised in the project's JIRA, but there is no comment about it on the Apache Nutch website.

https://issues.apache.org/jira/browse/NUTCH-2323

I think Nutch works great, though. But the indexer only supports Elasticsearch 1.x or 2.x (as I write this), which seems to be a deal breaker for considering Nutch, since our main search engine is Elasticsearch 5.x.

I’m looking into StormCrawler.

PhpStorm: File Watcher set up for SASS

Setting up a PhpStorm file watcher can be kind of tricky, but it's definitely good to have, since it removes the dependency on CodeKit, Koala, etc.

Here are example screenshots of the file watcher setup in PhpStorm. Before you use it, make sure to install the Compass/Sass compiler. First off, go to Preferences > Tools > File Watchers, and then click the add (+) button. (Note: this is on a Mac.)
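
For reference, the watcher fields look roughly like this; the arguments follow PhpStorm's stock SCSS watcher template and assume the Ruby sass gem (Compass users would point Program at compass and run compass compile instead):

  • File type: SCSS
  • Scope: Project Files
  • Program: sass
  • Arguments: --no-cache --update $FileName$:$FileNameWithoutExtension$.css
  • Output paths to refresh: $FileNameWithoutExtension$.css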

The example does not show how to set up two separate CSS output locations within a project. You can achieve that by creating a scope. This is useful when a layout and a theme in Drupal use different CSS locations.

[Screenshot: file_watch_themes]

If you use a single ./css folder as the CSS location across the project, you don't need the following setup.

[Screenshot: file_watcher_layouts]