Versions:
node: 22.10.0
"@graphql-hive/gateway": "1.6.6"
"@graphql-hive/gateway-runtime": "1.3.13"
Hi! We currently run a Hive Gateway setup to serve a supergraph.
We recently tried adding per-subgraph timeout settings. Below is our config:
import { defineConfig } from "@graphql-hive/gateway";

const allowedHeaders = [
  //...
];

const subgraph1Location = process.env.SUBGRAPH_1_LOCATION;
const subgraph1Timeout = Number(process.env.SUBGRAPH_1_TIMEOUT); // per-subgraph timeout in ms
const subgraph2Location = process.env.SUBGRAPH_2_LOCATION;
const subgraph2Timeout = Number(process.env.SUBGRAPH_2_TIMEOUT);
export const gatewayConfig = defineConfig({
  supergraph: "supergraph.graphql",
  port: parseInt(process.env.PORT || "8000"),
  cors: {
    origin: "*",
    methods: ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
    allowedHeaders,
  },
  landingPage: false,
  executionCancellation: true,
  upstreamCancellation: true,
  disableIntrospection: true,
  batching: false,
  pollingInterval: 31_556_952_000, // ~1 year, effectively disables supergraph polling
  maskedErrors: false,
  graphiql: false,
  transportEntries: {
    subgraph1: {
      location: subgraph1Location,
      options: {
        timeout: subgraph1Timeout,
      },
    },
    subgraph2: {
      location: subgraph2Location,
      options: {
        timeout: subgraph2Timeout,
      },
    },
  },
});
After a couple of days, we observed a memory leak and traced it to this code change on our side.
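
For reference, the growth is easy to see with a trivial heap watcher; here is a minimal sketch of the kind of logging that makes it visible (hypothetical snippet, not our actual monitoring setup):

// Logs RSS and heap usage once a minute; under steady load, a monotonic
// climb over hours/days is the leak signature.
setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1024 / 1024).toFixed(1)}MB heapUsed=${(heapUsed / 1024 / 1024).toFixed(1)}MB`,
  );
}, 60_000);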

After trying several things, the single change that fixed the leak was removing the options.timeout
setting from transportEntries. We moved the per-subgraph timeouts to a single value on the newly added
requestTimeout option, and memory usage stabilized immediately:
requestTimeout: Number(process.env.REQUEST_TIMEOUT), // single gateway-wide timeout in ms
transportEntries: {
  subgraph1: {
    location: subgraph1Location,
  },
  subgraph2: {
    location: subgraph2Location,
  },
},