mysterious errors occur in USDC test suite #42
pdrobnjak pushed a commit to Tenderly/net-scroll-geth that referenced this issue on Apr 30, 2025:
…h#42)

* rpc: improve error codes for internal server errors (ethereum#25678)
  This changes the error code returned by the RPC server in certain situations:
  - handler panic: code -32603
  - result marshaling error: code -32603
  - attempt to subscribe via HTTP: code -32001
  In all of the above cases, the server previously returned the default error code -32000. (Sketched below.)
  Co-authored-by: Nicholas Zhao, Felix Lange

* rpc: add PeerInfo (ethereum#24255)
  This replaces the sketchy and undocumented string context keys for HTTP requests with a defined interface. Using string keys with context is discouraged because they may clash with keys created by other packages. We added these keys to make connection metadata available in the signer, so this change also updates signer/core to use the new PeerInfo API. (Sketched below.)

* graphql: add query timeout (ethereum#26116)
  This PR adds a 60 second timeout to graphql queries.

* graphql, node, rpc: improve HTTP write timeout handling (ethereum#25457)
  Here we add special handling for sending an error response when the write timeout of the HTTP server is just about to expire. This is surprisingly difficult to get right, since it must be ensured that all output is fully flushed in time, which needs support from multiple levels of the RPC handler stack: the timeout response can't use chunked transfer-encoding because there is no way to write the final terminating chunk. net/http writes it when the topmost handler returns, but the timeout will already be over by the time that happens. We decided to disable chunked encoding by setting content-length explicitly. Gzip compression must also be disabled for timeout responses because we don't know the true content-length before compressing all output, i.e. compression would reintroduce chunked transfer-encoding.

* eth/filters, eth/tracers: add request cancellation checks (ethereum#26320)
  This ensures that RPC method handlers will react to a timeout or cancelled request soon after the event occurs.
  Co-authored-by: Sina Mahmoodi

* rpc: add limit for batch request items and response size (ethereum#26681)
  This PR adds server-side limits for JSON-RPC batch requests. Before this change, batches were limited only by processing time. The server would pick calls from the batch and answer them until the response timeout occurred, then stop processing the remaining batch items. Here, we are adding two additional limits which can be configured:
  - the 'item limit': batches can have at most N items
  - the 'response size limit': batches can contain at most X response bytes
  These limits are optional in package rpc. In Geth, we set a default limit of 1000 items and 25MB response size. (Sketched below.) When a batch goes over the limit, an error response is returned to the client. However, doing this correctly isn't always possible. In JSON-RPC, only method calls with a valid `id` can be responded to. Since batches may also contain non-call messages or notifications, the best-effort thing we can do to report an error with the batch itself is reporting the limit violation as an error for the first method call in the batch. If a batch is too large but contains only notifications and responses, the error will be reported with a null `id`.
  The RPC client was also changed so it can deal with errors resulting from too-large batches. An older client connected to the server code in this PR could get stuck until the request timeout occurred when the batch is too large. **Upgrading to a version of the RPC client containing this change is strongly recommended to avoid timeout issues.**
  For some weird reason, when writing the original client implementation, @fjl worked off of the assumption that responses could be distributed across batches arbitrarily. So for a batch request containing requests `[A B C]`, the server could respond with `[A B C]` but also with `[A B] [C]` or even `[A] [B] [C]` and it wouldn't make a difference to the client. So in the implementation of BatchCallContext, the client waited for all requests in the batch individually. If the server didn't respond to some of the requests in the batch, the client would eventually just time out (if a context was used). With the addition of batch limits into the server, we anticipate that people will hit this kind of error way more often. To handle this properly, the client now waits for a single response batch and expects it to contain all responses to the requests.
  Co-authored-by: Felix Lange, Martin Holst Swende

* format

* ethereum, ethclient: add FeeHistory support (ethereum#25403) (Sketched below.)
  Co-authored-by: Felix Lange

* internal/ethapi: return error when requesting invalid trie key (ethereum#25762)
  This change makes eth_getProof and eth_getStorageAt return an error when the argument contains invalid hex in storage keys.
  Co-authored-by: Felix Lange

* internal/ethapi: handle odd length hex in decodeHash (ethereum#25883)
  This change adds zero-padding (prefix) of odd nibbles in the decodeHash function.
  Co-authored-by: ty

* eth/filters, ethclient/gethclient: add fullTx option to pending tx filter (ethereum#25186)
  This PR adds a way to subscribe to the _full_ pending transactions, as opposed to just being notified about hashes. In use cases where a client subscribes to newPendingTransactions and gets tx hashes only to then request the actual transaction, the caller can now shortcut that flow and obtain the transactions directly. (Sketched below.)
  Co-authored-by: Felix Lange

* graphql: check header first in blocks query (ethereum#24190)
  Fixes ethereum#24167. New behaviour is that the endpoint returns results only for available blocks without returning an error when it doesn't find a block. Note that we skip any block after a non-existent block. This adds a header fetch for every block in range (even if the header is not needed). Alternatively, we could do the check in every field's resolver method to avoid this overhead.

* graphql: embed *Resolver instead of backend interface (ethereum#25468)
  This creates some infrastructure to share resources between graphql API objects.

* eth/filters: fix getLogs for pending block (ethereum#24949)
  - eth/filters: fix pending for getLogs
  - add pending method to test backend
  - fix block range validation

* eth/filters: add global block logs cache (ethereum#25459)
  This adds a cache for block logs which is shared by all filters. The cache size is configurable using the `--cache.blocklogs` flag.
  Co-authored-by: Felix Lange

* eth/filters: send rpctransactions in pending-subscription (ethereum#26126)
  This PR changes the pending tx subscription to return RPCTransaction types instead of normal Transaction objects. This will fix the inconsistencies with other tx-returning API methods (i.e. getTransactionByHash), and also fill in the sender value for the tx. (Sketched below.)
  Co-authored-by: @s1na

* rpc: fix unmarshaling of null result in CallContext (ethereum#26723)
  The change fixes unmarshaling of JSON null results into json.RawMessage. (Sketched below.)
  Co-authored-by: Jason Yuan

* eth/filters: fix a breaking change and return rpctransaction (ethereum#26757)
  - eth/filters: fix a breaking change and return rpctransaction
  - eth/filters: fix test cases
  Co-authored-by: Catror

Co-authored-by: Nicholas, Nicholas Zhao, Felix Lange, Ahmet Avci, Sina Mahmoodi, mmsqe, Martin Holst Swende, lightclient, TY, lmittmann, Jason Yuan, Yier, Catror
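To make the error-code item concrete, here is a minimal, hedged sketch of how a Go caller could inspect the new codes through the `rpc.Error` interface of the go-ethereum `rpc` package; the endpoint URL is an assumption, and the specific code values are the ones quoted in the commit text above.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// The node URL is an assumption for illustration.
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Per the commit above, subscribing over HTTP should now yield -32001 instead of -32000.
	var subID string
	err = client.CallContext(context.Background(), &subID, "eth_subscribe", "newHeads")

	// rpc.Error exposes the JSON-RPC error code alongside the message.
	var rpcErr rpc.Error
	if errors.As(err, &rpcErr) {
		switch rpcErr.ErrorCode() {
		case -32001:
			fmt.Println("notifications unsupported on this transport:", rpcErr)
		case -32603:
			fmt.Println("internal server error:", rpcErr)
		default:
			fmt.Println("other RPC error:", rpcErr.ErrorCode(), rpcErr)
		}
	} else if err != nil {
		fmt.Println("transport error:", err)
	}
}
```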
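For the PeerInfo item, a small sketch of reading connection metadata on the server side, assuming the `rpc.PeerInfoFromContext` helper exposed by package `rpc`; the `demo` service and its `WhoAmI` method are hypothetical and exist only for illustration.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"github.com/ethereum/go-ethereum/rpc"
)

// DemoService is a hypothetical RPC service used only to illustrate PeerInfoFromContext.
type DemoService struct{}

// WhoAmI is exposed as demo_whoAmI and echoes metadata about the calling connection.
func (s *DemoService) WhoAmI(ctx context.Context) string {
	info := rpc.PeerInfoFromContext(ctx)
	return fmt.Sprintf("transport=%s remote=%s user-agent=%s",
		info.Transport, info.RemoteAddr, info.HTTP.UserAgent)
}

func main() {
	srv := rpc.NewServer()
	if err := srv.RegisterName("demo", new(DemoService)); err != nil {
		log.Fatal(err)
	}
	// rpc.Server implements http.Handler, so it can be mounted directly.
	log.Fatal(http.ListenAndServe("127.0.0.1:8599", srv))
}
```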
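For the batch-limit item, a sketch of both sides of the change: it assumes the server gained a `SetBatchLimits` setter as part of this work (the exact name may differ between versions) and uses the long-standing `BatchCallContext`/`BatchElem` client API. The limits mirror the Geth defaults quoted above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Server side (for an embedded rpc.Server): mirror Geth's defaults of
	// 1000 items and 25 MB of response data per batch.
	srv := rpc.NewServer()
	srv.SetBatchLimits(1000, 25*1000*1000)
	defer srv.Stop()

	// Client side: a batch of three calls. The client now expects all responses
	// in a single response batch and surfaces per-item errors via BatchElem.Error.
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	batch := []rpc.BatchElem{
		{Method: "eth_blockNumber", Result: new(string)},
		{Method: "eth_chainId", Result: new(string)},
		{Method: "eth_gasPrice", Result: new(string)},
	}
	if err := client.BatchCallContext(context.Background(), batch); err != nil {
		log.Fatal(err) // transport-level failure
	}
	for _, elem := range batch {
		if elem.Error != nil {
			fmt.Println(elem.Method, "failed:", elem.Error)
			continue
		}
		fmt.Println(elem.Method, "=>", *elem.Result.(*string))
	}
}
```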
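For the FeeHistory item, a sketch of the new `ethclient` call; the endpoint URL, block count, and percentile choices are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Ask for the last 4 blocks (a nil lastBlock means "latest") and the
	// 25th/75th percentile priority-fee rewards per block.
	hist, err := client.FeeHistory(context.Background(), 4, nil, []float64{25, 75})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("oldest block:", hist.OldestBlock)
	for i, ratio := range hist.GasUsedRatio {
		fmt.Printf("block +%d: baseFee=%v gasUsedRatio=%.2f\n", i, hist.BaseFee[i], ratio)
	}
}
```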
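For the fullTx pending-transaction item, a sketch using the `gethclient` wrapper; it assumes a WebSocket endpoint (subscriptions are not served over plain HTTP) and a `SubscribeFullPendingTransactions` method, whose exact name may vary by version.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient/gethclient"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Subscriptions need ws:// or IPC; the URL here is an assumption.
	rpcClient, err := rpc.Dial("ws://127.0.0.1:8546")
	if err != nil {
		log.Fatal(err)
	}
	defer rpcClient.Close()

	gc := gethclient.New(rpcClient)
	txs := make(chan *types.Transaction, 64)

	// Receive complete transactions instead of just their hashes.
	sub, err := gc.SubscribeFullPendingTransactions(context.Background(), txs)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	for {
		select {
		case err := <-sub.Err():
			log.Fatal(err)
		case tx := <-txs:
			fmt.Println("pending tx:", tx.Hash(), "gas:", tx.Gas())
		}
	}
}
```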
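For the null-result fix, a sketch of reading a possibly-null response into `json.RawMessage`; the transaction hash is a placeholder, and on older client versions a missing result may surface as an error from `CallContext` instead of a `null` payload.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Placeholder hash: the node knows nothing about it, so the result is JSON null.
	unknownTx := "0x0000000000000000000000000000000000000000000000000000000000000000"

	var raw json.RawMessage
	err = client.CallContext(context.Background(), &raw, "eth_getTransactionReceipt", unknownTx)
	if err != nil {
		log.Fatal(err)
	}
	// After the fix described above, a null result lands in the RawMessage as "null".
	if len(raw) == 0 || string(raw) == "null" {
		fmt.Println("no receipt for", unknownTx)
		return
	}
	fmt.Println("receipt:", string(raw))
}
```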
keystore: testchain.zip
genesis.json:
geth cmdline:
./build/bin/geth --datadir testchain --mine --unlock 0,1,2,3,4,5,6,7,8,9,10,11,12 --allow-insecure-unlock --miner.etherbase 0x221cb8Cd97223442754493A31184CF81F418E728 --maxpeers 0 --http --http.corsdomain https://remix.ethereum.org --http.api personal,eth,net,web3,debug
then run `yarn test` from centre-tokens.

The suite fails with `ERROR: Failed to trace data`.
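Since the command line above exposes the `debug` namespace, one hedged way to narrow down a "Failed to trace data" failure is to call the tracer directly over RPC and inspect the raw JSON-RPC error; the transaction hash below is a placeholder for one taken from a failing test.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Placeholder: substitute a transaction hash from one of the failing tests.
	txHash := "0x0000000000000000000000000000000000000000000000000000000000000000"

	var trace json.RawMessage
	err = client.CallContext(context.Background(), &trace, "debug_traceTransaction",
		txHash, map[string]interface{}{"tracer": "callTracer"})
	if err != nil {
		// The error returned here is usually more specific than the
		// "Failed to trace data" string surfaced by the test tooling.
		log.Fatal("trace failed: ", err)
	}
	fmt.Println(string(trace))
}
```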