I haven’t used Adobe Bridge for quite a long time, and when I came back to it recently I found that all my hierarchical keywords had been lost. This was because I’ve had a new computer since last using it, and Bridge stores its hierarchical keyword list in a location I hadn’t been backing up.
All the hierarchical keywords that had been assigned to images were still in the metadata of those images; they just weren’t in the list Bridge keeps for assigning to new images. So I needed some way of pulling the hierarchical keywords out of all the images and using them to rebuild the list of keywords available in Bridge.
In this post I’ll go over how I did this. Note that all my processing was done in a Linux environment, but you could probably adapt my code to Windows equivalents or run it from WSL. Also note that this is not a particularly robust solution, but rather one that was ‘good enough’ for me.
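To give an idea of the extraction step, here’s a rough sketch using exiftool (the directory path is a placeholder, and it assumes the keywords live in the XMP HierarchicalSubject tag, which is where Bridge and Lightroom write pipe-delimited hierarchical keywords):

# List every distinct hierarchical keyword stored in the images' metadata
exiftool -r -s3 -sep $'\n' -HierarchicalSubject /path/to/photos | sort -u > keywords.txt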
Lately I’ve been working on getting a CEP panel working for reading and writing metadata to / from images in Adobe Bridge. Many years ago I did create a File Info panel using the File Info SDK with MXML and ActionScript3, but Adobe dropped compatibility with File Info Panels created that way quite a while back.
Although Adobe do still offer a File Info SDK, it seems the current recommended way to do most things in this sort of vein is with CEP. So I thought I better try creating my panel using CEP, thinking that the File Info panel may be retired soon while CEP will hopefully have a longer life.
I haven’t found it very easy, so I thought I would share some of the stuff I’ve had to work out so far. No doubt I will be posting at least one more of these as I discover more issues. The points below relate to CEP, the ExtendScript portion of CEP, and the XMP API for ExtendScript.
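To give a flavour of the XMP side of things, reading a file’s XMP from ExtendScript looks roughly like this (a minimal sketch only: the file path is a placeholder and error handling is omitted):

// Load the XMPScript library into ExtendScript
if (ExternalObject.AdobeXMPScript === undefined) {
    ExternalObject.AdobeXMPScript = new ExternalObject("lib:AdobeXMPScript");
}
var xmpFile = new XMPFile("/path/to/image.jpg", XMPConst.UNKNOWN, XMPConst.OPEN_FOR_READ);
var xmp = xmpFile.getXMP();
// Read a simple property, e.g. the creator tool
var creatorTool = xmp.getProperty(XMPConst.NS_XMP, "CreatorTool");
$.writeln(creatorTool ? creatorTool.value : "not set");
xmpFile.closeFile();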
I was trying to set up a local (development) copy of a site I manage today, but found that I was getting a ‘Too many redirects’ error when trying to load it. Eventually I tracked it down to the WordPress redirect_canonical() function, and more specifically to is_ssl().
is_ssl() was reporting false even though I was requesting the site over https, so WordPress kept redirecting to the https URL (which is what I have set as the siteurl in the WP options), causing an infinite redirect loop.
The cause of this problem and the solution can be found here: WordPress Function Reference – is_ssl(). The problem was that I was using a reverse proxy setup, so the Apache instance running WordPress wasn’t using https; only the nginx server handling the initial requests was.
By adding this to the nginx config:

proxy_set_header X-Forwarded-Proto https;

and this to wp-config.php:

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $_SERVER['HTTPS'] = 'on';
}

the problem is solved.
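For context, the nginx directive goes in the server/location block that proxies to the Apache backend (the backend address below is hypothetical), and the wp-config.php check needs to sit above the /* That's all, stop editing! */ line so it runs before WordPress loads:

location / {
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:8080;   # hypothetical Apache backend
}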
I’d be interested to know how this is normally handled in environments using reverse proxies, as I would think many shared webhosts use this structure, but users aren’t required to add checks for the X-Forwarded-Proto header in their wp-config just to get WordPress working on https. Or are they?
Today I wanted to add the PHP EXIF extension to my local PHP installation. According to the PHP manual, to do this you should configure PHP with --enable-exif. However, I didn’t want to go through the tedious process of compiling PHP from scratch again; I just wanted to add the single extension.
I couldn’t find any info on how to do this, but thankfully it is actually quite simple: it’s pretty much the same as compiling any extension that doesn’t come with PHP as standard. The same process should work with any of the extensions that ship with PHP.
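For the record, this is roughly what that looks like for the exif extension (the paths and sudo are examples; you need the source tree matching your installed PHP version):

# Build just the bundled exif extension as a shared module
cd php-src/ext/exif
phpize            # generates the configure script using your installed PHP's tooling
./configure
make
sudo make install # copies exif.so into your PHP extension directory
# then enable it in php.ini with:
#   extension=exif.so
# and restart PHP / the web server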
I was having trouble with supervisord being unable to start nginx on a new dev VM I had set up. In the supervisor stderr log for nginx and nginx’s error.log I was getting:
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/nginx-1.9.12/conf/nginx.conf
...
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
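In other words, the nginx master process wasn’t being started with root privileges, so it couldn’t bind to port 80 (and the user directive was being ignored). One common way to set this up, sketched below (assuming supervisord itself runs as root; the nginx path follows the nginx.conf location in the log above), is to let supervisor run the master as root in the foreground and leave privilege dropping to nginx’s own user directive:

[program:nginx]
; assumes supervisord runs as root and no "user=" is set here, so the nginx
; master can bind to :80 and then drop worker privileges via its own config
command=/opt/nginx-1.9.12/sbin/nginx -g "daemon off;"
autorestart=true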
I had an issue lately where a new subdomain I’d added for a site wasn’t accessible. Trying to debug it, I found that running nslookup sub.example.com my.webhost.dns returned the correct IP address of the server the subdomain was meant to be pointing to. But when I ran nslookup sub.example.com 8.8.8.8 (that’s Google’s DNS server), it couldn’t find the domain.
Eventually I tracked down the problem, and it was something very simple. The domain wasn’t actually set to use my webhost’s DNS servers. Instead I had it configured to use CloudFlare’s DNS servers.
So if you have this problem, make sure you double-check that the DNS server(s) you’re updating are actually the nameservers the domain is delegated to. It might seem obvious, but it’s easy to overlook (at least it was for me!)
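A quick way to check which nameservers a domain is actually delegated to is to look up its NS records (example.com standing in for the real domain):

# Ask a public resolver for the domain's NS records
dig +short NS example.com @8.8.8.8
# or with nslookup
nslookup -type=NS example.com 8.8.8.8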
I had a problem recently where Nginx wasn’t gzipping responses, despite having the necessary lines in my nginx.conf. In looking for a solution, I found quite a few posts covering various reasons why gzip might not be working. But none that fitted my case.
So, I thought I might as well share what the problem / solution was in my case, plus the other reasons why Nginx may not be gzipping files. Hopefully this will be helpful to anyone else trying to figure out why their Nginx gzip configuration isn’t working.
If you had something like the above, but your JavaScript was actually served with a MIME type of application/javascript, then it wouldn’t be gzipped, because application/javascript is not listed in the MIME types you want gzipped.
So the solution here is just to ensure you list all the MIME types you want gzipped after the gzip_types directive.
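For example, something along these lines (the exact list is up to you; text/html is compressed by default, so it doesn’t need listing):

gzip on;
gzip_types text/plain text/css text/javascript application/javascript application/json application/xml image/svg+xml;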
Normally, as part of gzip configuration you will include a minimum size that the response must be for it to get gzipped. (There’s little benefit in gzipping already very small files).
gzip_min_length 1100;
It can be easy to forget this and think that gzip isn’t working, when actually it is; you’re just testing with a small file that falls below the minimum length and so shouldn’t be gzipped.
This was what the problem was in my case. By default, Nginx will only gzip responses where the HTTP version being used is 1.1 or greater. This will be the case for nearly all browsers, but the problem comes when you have a proxy in front of your Nginx instance.
In my case, my webhost uses Nginx, which then proxies requests to my Nginx instance, and I’ve mirrored this setup in my development environment. The problem is that, by default, Nginx proxies requests using HTTP/1.0.
So the browser was sending the request using HTTP/1.1, and the frontend Nginx was receiving it and then proxying it to my backend Nginx using HTTP/1.0. My backend Nginx saw that the HTTP version didn’t meet the default gzip minimum of 1.1 and so sent the response back without gzipping it.
In this case you either need to set the proxy_http_version directive on the proxying server to 1.1, or set gzip_http_version to 1.0 in the backend’s config.
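In config terms that’s one of the following (the upstream name is hypothetical):

# On the proxying (frontend) server:
location / {
    proxy_http_version 1.1;
    proxy_pass http://backend;   # hypothetical upstream
}

# Or on the backend (gzipping) server:
gzip_http_version 1.0;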
I think this is likely to be a rather unusual situation, but I found it described here: nginx gzip enabled but not gzipping. Basically they had some security software installed on the client machine they were testing from, and this software was decompressing and inspecting all responses before they were passed on to the browser.
The same thing could happen if there was a proxy between you and the server that decompresses any gzipped responses before sending them on to you, but I think it would be very rare to have a proxy configured like that.
There could also be other reasons why Nginx might not be gzipping responses. For example, you might have a gzip_disable directive that matches the User-Agent you’re testing with, or a gzip off; somewhere later in your config. But I think the items above are likely to be the main reasons why Nginx isn’t (or looks like it isn’t) gzipping files when it should be.
Recently I’ve been working on a widget that makes use of this hack using animation events as an alternative to DOM Mutation events. The nice thing about this method is that it lets you add the event listener on the element you want to get the ‘node inserted’ event for. Whereas with DOM mutation events, you must add the listener to the parent node. In cases where you don’t know where the node will be inserted, this means attaching the mutation listener to the body, and you have to filter all mutation events to try and find the one for your element. With the animation event method you don’t have that problem.
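For reference, the listener side of the hack looks something like this (a sketch: the animation name, class name and timing are whatever your CSS defines, and older WebKit builds also need the webkitAnimationStart event name):

// CSS elsewhere gives the element a near-instant animation, e.g.
//   .my-widget { animation: nodeInserted 0.001s; }
var el = document.createElement('div');
el.className = 'my-widget';

// The listener can go on the element itself, before it is in the DOM
el.addEventListener('animationstart', function (event) {
    if (event.animationName === 'nodeInserted') {
        // the element has just been inserted into the document
    }
}, false);

document.body.appendChild(el);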
Anyway, to get on to the main point of this post: I was having a big problem where my widget worked fine in all browsers (that support CSS3 animations) apart from MS Edge. It seemed very strange that something working in older IEs would not work in Edge. The problem was that the animation event was never being fired when the node was inserted. But when I tried the jsFiddle example from the backalleycoder post, that worked fine in Edge.
After much debugging, I found the issue. I had my keyframes like this:
@keyframes nodeInserted {
    from {
        outline-color: #000;
    }
    to {
        outline-color: #111;
    }
}
@-moz-keyframes nodeInserted {
}
@-webkit-keyframes nodeInserted {
    from {
        outline-color: initial;
    }
    to {
        outline-color: initial;
    }
}
@-ms-keyframes nodeInserted {
    from {
        outline-color: #000;
    }
    to {
        outline-color: #111;
    }
}
@-o-keyframes nodeInserted {
    from {
        outline-color: #fff;
    }
    to {
        outline-color: #000;
    }
}
Initially I had the unprefixed @keyframes empty, but when playing with the jsFiddle example I found MS Edge didn’t like an empty @keyframes, nor did it like a @keyframes animating a value from initial to initial. The problem with my CSS was that, after defining the unprefixed @keyframes in a form Edge will fire an animation event for, I then had a webkit-prefixed @keyframes using the initial values it doesn’t like.
MS Edge was picking up the webkit-prefixed @keyframes and using that definition, since it comes later in the stylesheet than the unprefixed version. So the solution was simply to move the unprefixed @keyframes down to the bottom.
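So the working stylesheet ends up shaped like this (trimmed to two blocks; the point is just that the unprefixed @keyframes comes last, so it wins the cascade in Edge):

@-webkit-keyframes nodeInserted {
    from { outline-color: initial; }
    to { outline-color: initial; }
}
/* unprefixed version last, so Edge uses this one */
@keyframes nodeInserted {
    from { outline-color: #000; }
    to { outline-color: #111; }
}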
It seems a bit silly that MS Edge will pick up the webkit prefixed declaration, but doesn’t pick up the later ms prefixed declaration. But I guess that’s the kind of weirdness you come to expect from MS.
This foxed me for quite a while, so I hope this helps anyone else coming across the same problem.
I’m not particularly knowledgeable on the subject of optimising SQL queries, so the easiest way for me to optimise a query is to write a few variations and then test them against one another. To this end I’ve developed a PHP class to do the testing and benchmarking. I think that even if I was highly knowledgeable about optimising queries, I would still want to test my queries to ensure that my theory held true in practice.
For a useful benchmark you need to execute the queries using a range of data that simulates the real data the queries would be run against. The queries also need to be executed in a random order and multiple times, so that the results can be averaged and are reasonably reliable. That’s what this class does, along with providing a summary of the results in CSV format.
It should be noted that this class does not set up or modify any tables for testing with – it just allows you to supply a range of data to be included within the queries themselves, such as testing with a range of different values in a WHERE clause.
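To make that concrete, here is a rough sketch of the idea in plain PHP (this is not the class’s actual API; the PDO connection details and the two query variants are made-up examples):

<?php
// Run each query variant against a range of values, in random order,
// several times over, then average the timings per variant.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Hypothetical variants of the same query to compare
$queries = [
    'join'   => 'SELECT COUNT(*) FROM orders o JOIN customers c ON c.id = o.customer_id WHERE c.country = :val',
    'exists' => 'SELECT COUNT(*) FROM orders o WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id AND c.country = :val)',
];
$values = ['GB', 'US', 'FR'];   // range of data to substitute into the queries
$runs   = 20;                   // repetitions per query/value combination

$jobs = [];
foreach ($queries as $name => $sql) {
    foreach ($values as $val) {
        for ($i = 0; $i < $runs; $i++) {
            $jobs[] = [$name, $sql, $val];
        }
    }
}
shuffle($jobs);   // random order so caching and warm-up effects average out

$timings = [];
foreach ($jobs as $job) {
    list($name, $sql, $val) = $job;
    $start = microtime(true);
    $stmt  = $pdo->prepare($sql);
    $stmt->execute(['val' => $val]);
    $stmt->fetchAll();
    $timings[$name][] = microtime(true) - $start;
}

// CSV summary: variant name, average time in seconds
foreach ($timings as $name => $times) {
    echo $name . ',' . (array_sum($times) / count($times)) . "\n";
}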
I decided to update PHP, and had a few problems compiling its dependencies. So I thought I’d share the problems I had and the solutions here for future reference, and maybe they might help someone else as well.