
[vcl] Don't touch the accept-encoding header #28894


Closed
wants to merge 1 commit into from

Conversation

gquintard
Contributor

It's fairly useless to change the accept-encoding (AE) header on the client
side, for three reasons:

  • the client knows best what it's capable of, and varnish should try to
    cater to the original intent
  • the AE header is overwritten before entering vcl_backend_fetch, so
    editing it on the client side has no impact on the backend transaction
  • varnish always asks the backend for a compressed version, and what
    the backend replies dictates what varnish replies to the users. If the
    backend responded with uncompressed data, varnish understands it
    shouldn't try to compress the data at all

note: all versions have been changed for the sake of consistency,
but both the 4.x and 5.x series were EOL'd a (long) while ago and users
should be encouraged to upgrade as soon as possible.
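
For reference, the client-side normalization this kind of PR removes usually looks like the snippet below. This is a sketch of the common VCL pattern, not a copy of Magento's exact file:

```vcl
sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            # collapse "gzip, deflate, br" and friends down to plain gzip
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithms: pretend the client accepts identity only
            unset req.http.Accept-Encoding;
        }
    }
}
```

As argued above, none of this survives into vcl_backend_fetch, so it only spends cycles on the client side without affecting the backend transaction.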

Contribution checklist (*)

  • Pull request has a meaningful description of its purpose
  • All commits are accompanied by meaningful commit messages
  • All new or changed code is covered with unit/integration tests (if applicable)
  • All automated tests passed successfully (all builds are green)

@m2-assistant

m2-assistant bot commented Jun 25, 2020

Hi @gquintard. Thank you for your contribution
Here are some useful tips on how you can test your changes using the Magento test environment.
Add a comment under your pull request to deploy a test or vanilla Magento instance:

  • @magento give me test instance - deploy test instance based on PR changes
  • @magento give me 2.4-develop instance - deploy vanilla Magento instance

❗ Automated tests can be triggered manually with an appropriate comment:

  • @magento run all tests - run or re-run all required tests against the PR changes
  • @magento run <test-build(s)> - run or re-run specific test build(s)
    For example: @magento run Unit Tests

<test-build(s)> is a comma-separated list of build names. Allowed build names are:

  1. Database Compare
  2. Functional Tests CE
  3. Functional Tests EE
  4. Functional Tests B2B
  5. Integration Tests
  6. Magento Health Index
  7. Sample Data Tests CE
  8. Sample Data Tests EE
  9. Sample Data Tests B2B
  10. Static Tests
  11. Unit Tests
  12. WebAPI Tests

You can find more information about the builds here

ℹ️ Please run only needed test builds instead of all when developing. Please run all test builds before sending your PR for review.

For more details, please, review the Magento Contributor Guide documentation.

@gquintard
Contributor Author

@magento run all tests

1 similar comment
@gquintard
Contributor Author

@magento run all tests

@gquintard
Contributor Author

is there anything to do here? one functional test failed, but it doesn't seem to be related to this PR?

@ihor-sviziev ihor-sviziev self-assigned this Aug 5, 2020
@ihor-sviziev ihor-sviziev added Auto-Tests: Not Required Changes in Pull Request does not require coverage by auto-tests Award: category of expertise labels Aug 5, 2020
@ihor-sviziev ihor-sviziev added the Severity: S3 Affects non-critical data or functionality and does not force users to employ a workaround. label Aug 5, 2020
@magento-engcom-team
Contributor

Hi @ihor-sviziev, thank you for the review.
ENGCOM-7952 has been created to process this Pull Request

@sidolov sidolov added the Triage: Dev.Experience Issue related to Developer Experience and needs help with Triage to Confirm or Reject it label Sep 2, 2020
@sidolov sidolov added Priority: P4 No current plan to fix. Fixing can be deferred as a logical part of more important work. Risk: medium labels Sep 10, 2020
@sidolov
Contributor

sidolov commented Sep 10, 2020

Risk: medium - reverting the changes may cause the issue described in ticket: #1575

@gquintard
Contributor Author

@sidolov, indeed, that could be an issue that I missed. It really hinges on one question: does Magento ever reply with vary: accept-encoding?

@gquintard
Contributor Author

@sidolov let's drop that one, I don't know enough about Magento to guarantee it's safe.

But the current code is still suboptimal, storing multiple versions of one object.

I'll close this for now

@gquintard gquintard closed this Sep 28, 2020
@m2-assistant

m2-assistant bot commented Sep 28, 2020

Hi @gquintard, thank you for your contribution!
Please complete the Contribution Survey, it will take less than a minute.
Your feedback will help us improve the contribution process.

@gquintard
Contributor Author

Hi, it's me again!

So, it turns out that the vary handling isn't an issue at all because Varnish special-cases the accept-encoding header when processing Vary. So the PR is still valid: removing the code lets the backend decide about compression, and Varnish will just follow suit.
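
To illustrate the special-casing, here's a sketch of a test (assuming a recent Varnish with its default gzip support): because Varnish normalizes accept-encoding before matching Vary, two requests with different but gzip-capable AE headers should resolve to the same cached object:

```vcl
varnishtest "vary: accept-encoding is special-cased"

# the backend replies only once, marking the response as varying on AE
server s1 {
        rxreq
        txresp -hdr "Vary: Accept-Encoding"
} -start

varnish v1 -vcl+backend {} -start

client c1 {
        # two different accept-encoding strings, both gzip-capable
        txreq -url /obj -hdr "accept-encoding: gzip"
        rxresp
        txreq -url /obj -hdr "accept-encoding: deflate, gzip, br"
        rxresp
} -run

# the second request was served from cache, no second backend fetch
varnish v1 -expect cache_hit == 1
```

If Vary were applied literally on the raw header strings, the second request would miss and the server spec above would fail on an unexpected second rxreq.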

And so I reviewed #1575 again with that new knowledge and while it meant well, it was unnecessary.

@gquintard
Contributor Author

Happy new year everyone!
Is there anything I can do to help get this PR merged?

@magento-engcom-team
Contributor

Hi @ihor-sviziev, thank you for the review.
ENGCOM-8595 has been created to process this Pull Request

@gquintard
Contributor Author

hi, we are almost at the one-year anniversary of that PR, is there something I can do to make it happen?

@ihor-sviziev
Contributor

@gquintard, unfortunately, this PR has low priority and will be processed after all high priority ones

@hostep hostep mentioned this pull request Nov 25, 2022
5 tasks
@jonashrem
Contributor

I think this will break ESI, when the backend is supporting brotli.

@gquintard
Contributor Author

Varnish clears the accept-encoding header in Bereq, so all the current logic is wasted.

@gquintard
Contributor Author

Varnish clears the accept-encoding header in Bereq, so all the current logic is wasted.

PROTIP: don't answer comments early in the morning, otherwise you'll have to correct yourself. Anyway...

The fully accurate statement is: When caching, Varnish forces bereq.http.accept-encoding to gzip, and when not caching it just uses the request header verbatim.

A test is usually more convincing, so here we go:

varnishtest "accept-encoding"

# dumb backend, reply with an empty response every time
server s1 {
        loop 40 {
                rxreq
                txresp
        }
} -start

# simple vcl, the only fancy thing is that it copies the accept-encoding header
# sent to the backend into the response
varnish v1 -vcl+backend {
	sub vcl_backend_response {
		set beresp.http.ae = bereq.http.accept-encoding;
	}
} -start

# the client will be doing all the checking
# we just need to make sure the URL is always different to always go to the
# backend
client c1 {
	# varnish systematically sets the header to gzip if it tries to cache
	txreq -url /first
	rxresp
	expect resp.http.ae == "gzip"

	txreq -url /second -hdr "accept-encoding: brotli, gzip"
	rxresp
	expect resp.http.ae == "gzip"

	txreq -url /third -hdr "accept-encoding: foo"
	rxresp
	expect resp.http.ae == "gzip"

	# however, if the request is pass'd to the backend, the header is left
	# untouched
	txreq -method POST -url /fourth -hdr "accept-encoding: bar"
	rxresp
	expect resp.http.ae == "bar"
} -run

you can run it with

varnishtest -v foo.vtc
# or with
docker run -it --rm -v `pwd`/foo.vtc:/tmp/foo.vtc varnish:7.2 varnishtest -v /tmp/foo.vtc

A sane backend should not break when using brotli as neither the current vcl nor the default Varnish announces support for it.

@hostep
Contributor

hostep commented Nov 22, 2023

@gquintard: since you opened #38211 with the exact same changes, I think we can close this one?

@gquintard
Contributor Author

ah, I forgot about this guy.

we can close either, but is there any chance this would get looked at? If not, we can just close both

@hostep
Contributor

hostep commented Nov 22, 2023

I have no idea. A few months ago Adobe started an initiative where they prioritize PR's that have most 👍 reactions. So if you can gather like 15 or 20 👍 on one of your PR's, chances will be a lot higher it will get picked up. I know it's a stupid rule, but it's what it is ...

@gquintard
Contributor Author

thanks, very good to know, let's social-engineer some enthusiasm!

Labels
Auto-Tests: Not Required Changes in Pull Request does not require coverage by auto-tests Award: category of expertise Component: PageCache Priority: P4 No current plan to fix. Fixing can be deferred as a logical part of more important work. Release Line: 2.4 Risk: medium Severity: S3 Affects non-critical data or functionality and does not force users to employ a workaround. Triage: Dev.Experience Issue related to Developer Experience and needs help with Triage to Confirm or Reject it
6 participants