Hello @VirtuBox,
I just created a new server and migrated my sites to it.
While analyzing the site that receives traffic, I noticed a high number of cache = BYPASS results.

Using the curl command (domain changed):

curl -sLI domain.tld
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Fri, 20 Mar 2020 17:28:02 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://domain/
X-Powered-By: WordOps
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Referrer-Policy: no-referrer, strict-origin-when-cross-origin
X-Download-Options: noopen

HTTP/2 200
server: nginx
date: Fri, 20 Mar 2020 17:28:02 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
link: <https://domain/wp-json/>; rel="https://api.w.org/"
link: <https://domain/>; rel=shortlink
x-powered-by: WordOps
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer, strict-origin-when-cross-origin
x-download-options: noopen
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-fastcgi-cache: MISS

Screenshots from Google webmaster tools and nginx VTS:

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic

# wo info

NGINX (1.16.1):

user www-data
worker_processes auto
worker_connections 50000
keepalive_timeout 8
fastcgi_read_timeout 300
client_max_body_size 100m
allow 127.0.0.1 ::1

PHP (7.2.28-3):

user
expose_php Off
memory_limit 128M
post_max_size 100M
upload_max_filesize 100M
max_execution_time 300

Information about www.conf
ping.path /ping
pm.status_path /status
process_manager ondemand
pm.max_requests 1500
pm.max_children 50
pm.start_servers 10
pm.min_spare_servers 5
pm.max_spare_servers 15
request_terminate_timeout 300
xdebug.profiler_enable_trigger off
listen php72-fpm.sock

Information about debug.conf
ping.path /ping
pm.status_path /status
process_manager ondemand
pm.max_requests 1500
pm.max_children 50
pm.start_servers 10
pm.min_spare_servers 5
pm.max_spare_servers 15
request_terminate_timeout 300
xdebug.profiler_enable_trigger on
listen 127.0.0.1:9172

PHP (7.3.15-3):

user
expose_php Off
memory_limit 128M
post_max_size 100M
upload_max_filesize 100M
max_execution_time 300

Information about www.conf
ping.path /ping
pm.status_path /status
process_manager ondemand
pm.max_requests 1500
pm.max_children 50
pm.start_servers 10
pm.min_spare_servers 5
pm.max_spare_servers 15
request_terminate_timeout 300
xdebug.profiler_enable_trigger off
listen php73-fpm.sock

Information about debug.conf
ping.path /ping
pm.status_path /status
process_manager ondemand
pm.max_requests 1500
pm.max_children 50
pm.start_servers 10
pm.min_spare_servers 5
pm.max_spare_servers 15
request_terminate_timeout 300
xdebug.profiler_enable_trigger on
listen 127.0.0.1:9173

MySQL (10.3.22-MariaDB) on localhost:

port 3306
wait_timeout 60
interactive_timeout 28800
max_used_connections 6
datadir /var/lib/mysql/
socket /var/run/mysqld/mysqld.sock
my.cnf [PATH] /etc/mysql/conf.d/my.cnf

# wo -v
WordOps v3.11.4
Copyright (c) 2019 WordOps.

BYPASS means requests that have been specifically configured NOT to be cached (such as logged-in users). You can check your access logs to find the actual requests and see what they are.
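
For example, something like this will show a sample of the bypassed requests (a rough sketch: the log path below is the default WordOps per-site location, and it assumes your log_format records $upstream_cache_status, so adjust it to your setup):

# show the 20 most recent requests that bypassed the FastCGI cache
grep 'BYPASS' /var/www/domain.tld/logs/access.log | tail -n 20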

I always look at the logs, but I followed the suggestion and looked again.
I found the reason for the BYPASS status:
it is the ?gclid parameters generated by Google Ads that are not added to the cache.

Now I have to figure out how to add /?gclid to the cache rule, ideally as a default, so this type of parameter does not bypass the cache.

I use a workaround. Place the following in /var/www/domain.dom/conf/nginx/no-click-id.conf:

if ($request_uri ~ "([^\?]*)\?(.*)fbclid=([^&]*)&?(.*)") {
    set $original_path $1;
    set $args1 $2;
    set $unwanted $3;
    set $args2 $4;
    set $args "";

    rewrite ^ "${original_path}" permanent;
}

if ($request_uri ~ "([^\?]*)\?(.*)gclid=([^&]*)&?(.*)") {
    set $original_path $1;
    set $args1 $2;
    set $unwanted $3;
    set $args2 $4;
    set $args "";

    rewrite ^ "${original_path}" permanent;
}

This works fine for my needs, but you know, "if is evil". It checks for the fbclid or gclid parameters and redirects the request to the same URL without the parameter.

Perhaps you or someone else can find the correct way to implement this.
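
If it helps, a quick way to check that the rule behaves as described (domain.dom is just the placeholder used above):

# a request carrying a click id should now answer with a 301
# whose Location header points to the same path without the query string
curl -sI "https://domain.dom/?gclid=test123" | grep -iE '^(HTTP|location)'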

    Thank you @portofacil, this worked perfectly for me.
    I was following the nginx documentation but could not get it working.

    I also use WP Rocket; it is possible to add gclid as a rule in WP Rocket, but for some reason it did not handle the rule and it did not work.

      a year later

      JuanMaia WP Rocket ignores query strings such as fbclid and gclid, and most of the other known ones.

      2 years later

      Hello portofacil

      On a WooCommerce site, if there is an item in the cart, pages will not be cached while the user browses other pages on the site, and everything becomes quite slow in this situation.

      Is there any way to have pages cached when there are item(s) in the cart, without causing issues?

      Thanks

        alexlii1971 I don't know. I don't use WooCommerce, and my clients who do use it do not use any cache engine on their stores.

        a year later

        Hi, I had a similar issue with gclid, gbraid, and other tracking parameters that are unique to each user.

        For example:

        domain.com/?gclid=d189h9d8h1892819dh198289ashdakjsdh8912721
        domain.com/mypage/?gclid=d189h9d8h1892819dh198289ashdakjsdh8912721
        domain.com/my-page-2/?gclid=d189h9d8h1892819dh198289ashdakjsdh8912721

        These URLs were missing the FastCGI cache instead of being served the cached versions of these pages:

        domain.com/
        domain.com/mypage/
        domain.com/my-page-2/

        To fix this under my current installation of WordOps v3.21.3 on Ubuntu 20.04.6 LTS:

        At each step, run nginx -t to check your syntax, and when you are done, run service nginx reload.
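
        For example, chaining the two commands so the reload only happens when the configuration test passes:

        nginx -t && service nginx reload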

        1. Edit /etc/nginx/conf.d/map-wp-fastcgi-cache.conf and add the "gclid" parameter.
        
        # Cache requests with query strings related to analytics
        map $args $args_to_cache {
            default 0;
            "~*utm_" 1;
            "~*fbclid" 1;
            "~*gclid" 1;
        }
        2. Then, add the following to the bottom of the file:
        
        map $request_uri $cleaned_request_uri {
            ~^(.*)\?.*gclid=.*$ $1; # Remove gclid parameter if present
            ~^(.*)\?.*fbclid=.*$ $1; # Remove fbclid parameter if present
            default $request_uri;    # Default to original request URI
        }
        
        To also handle utm_* parameters some extra work is needed, but the following map entry also works (it removes only the parameters that start with "utm_"):

        ~^(.*)\?(.*&)?(utm_[^&]*)=[^&]*(?:&(.*))?$ $1?$2$4;

        3. Now edit /etc/nginx/conf.d/fastcgi.conf:

        Comment out:

        #fastcgi_cache_key "$scheme$request_method$host$request_uri";

        Add:

        fastcgi_cache_key "$scheme$request_method$host$cleaned_request_uri";
        fastcgi_param REQUEST_URI $cleaned_request_uri; # add this line as well
        fastcgi_param SERVER_NAME $http_host;
        4. Finally, you have two options: either create a new file at /var/www/domain.com/conf/nginx/cache.conf or edit common/wpfc-php82.conf (note that the latter will be overwritten when you update WO), and use the following try_files directive:

        try_files $cleaned_request_uri $uri $uri/ /index.php$is_args$args;

        Now, FastCGI will always hit the cache even if one of these parameters (gclid or fbclid) is present. It is kind of a workaround, but it works for me.
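
        A quick way to verify the result (a sketch; it assumes the x-fastcgi-cache header is exposed, as in the curl output at the top of this thread, and that the page is otherwise cacheable):

        # the first request warms the cache, the second should report a HIT
        # even though the click-id value differs between the two
        curl -so /dev/null -D - "https://domain.com/?gclid=first" | grep -i x-fastcgi-cache
        curl -so /dev/null -D - "https://domain.com/?gclid=second" | grep -i x-fastcgi-cache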

        However, you can always replace the two map entries in step 2 with:

        ~^(.*)\?(.*&)?(fbclid|gclid)=[^&]*(?:&(.*))?$ $1?$2$4; # Removes any of the specified parameters
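
        In that case, the map from step 2 would look something like this (just substituting the entry above into the same block):

        map $request_uri $cleaned_request_uri {
            ~^(.*)\?(.*&)?(fbclid|gclid)=[^&]*(?:&(.*))?$ $1?$2$4; # Removes any of the specified parameters
            default $request_uri;                                  # Default to original request URI
        }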

        Again: at each step, run nginx -t to check your syntax, and when you are done, run service nginx reload.
