
Add IQaudIO Sound Card support for Raspberry Pi #544


Merged: 4 commits into raspberrypi:rpi-3.10.y on Apr 1, 2014

Conversation

@iqaudio commented Mar 10, 2014

Request to merge into Raspberry Pi Linux. We would like to merge into Florian's ASoC fork too.

@popcornmix (Collaborator) commented

@koalo any comments?

@koalo (Contributor) commented Mar 18, 2014

I think it is OK. It could have been split into multiple commits, and the "NOT USED" lines could be removed (together with the whole snd_rpi_iqaudio_dac_init function), but all in all it looks good.

@amtssp commented Mar 30, 2014

I hope you will add this to the 3.13.y kernel as well.
What is the timeline for this?
Thanks

hmbedded and others added 3 commits March 30, 2014 12:54
This is so that the correct range of values, as specified
with the SOC_DOUBLE_R_RANGE_TLV macro, is sent to the
hardware for both the normal and invert cases.

This allows limiting the output gain to avoid clipping in the
DAC output stages.
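For context, here is a minimal sketch of how an ASoC codec driver declares a stereo volume control with SOC_DOUBLE_R_RANGE_TLV. The register names and dB values below are illustrative placeholders, not code from this PR:

```c
#include <sound/soc.h>
#include <sound/tlv.h>

/* Illustrative scale: -103.5 dB to 0 dB in 0.5 dB steps, with mute */
static const DECLARE_TLV_DB_SCALE(dac_tlv, -10350, 50, 1);

static const struct snd_kcontrol_new dac_controls[] = {
	/* DAC_VOL_LEFT/DAC_VOL_RIGHT are hypothetical register names.
	 * Capping xmax at 207 (instead of the register's full 255)
	 * limits the output gain to avoid clipping in the DAC output
	 * stages; the trailing 1 sets xinvert, the inverted case whose
	 * range handling these commits fix.
	 */
	SOC_DOUBLE_R_RANGE_TLV("Digital Playback Volume",
			       DAC_VOL_LEFT, DAC_VOL_RIGHT,
			       0, 0, 207, 1, dac_tlv),
};
```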
popcornmix added a commit that referenced this pull request Apr 1, 2014
Add IQaudIO Sound Card support for Raspberry Pi
@popcornmix merged commit 41f5c50 into raspberrypi:rpi-3.10.y on Apr 1, 2014
popcornmix pushed a commit that referenced this pull request Aug 19, 2020
[ Upstream commit f0a5e4d ]

YangYuxi reports that connection reuse causes a one-second
delay when a SYN hits an existing connection in TIME_WAIT
state. The delay was added to give time for both the IPVS
connection and the corresponding conntrack to expire. This
was considered a rare case at the time, but it causes
problems for some environments such as Kubernetes.

As nf_conntrack_tcp_packet() can decide to release the
conntrack in TIME_WAIT state and replace it with a fresh
NEW conntrack, we can use this to allow rescheduling just
by tuning our check: if the conntrack is confirmed, we
cannot schedule it to a different real server and the
one-second delay still applies; but if a new conntrack was
created, we are free to select a new real server without
any delay.

YangYuxi lists some of the problem reports:

- One second connection delay in masquerading mode:
https://marc.info/?t=151683118100004&r=1&w=2

- IPVS low throughput #70747
kubernetes/kubernetes#70747

- Apache Bench can fill up ipvs service proxy in seconds #544
cloudnativelabs/kube-router#544

- Additional 1s latency in `host -> service IP -> pod`
kubernetes/kubernetes#90854

Fixes: f719e37 ("ipvs: drop first packet to redirect conntrack")
Co-developed-by: YangYuxi <[email protected]>
Signed-off-by: YangYuxi <[email protected]>
Signed-off-by: Julian Anastasov <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
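To make the tuned check concrete, here is a simplified sketch of the decision the commit message describes. This is not the literal kernel diff, and the function name is invented for illustration, though nf_ct_is_confirmed() is the real conntrack helper involved:

```c
#include <net/netfilter/nf_conntrack.h>

/* Simplified sketch, not the actual upstream change: decide whether
 * an IPVS connection hit by a new SYN may be rescheduled to a
 * different real server.
 */
static bool ipvs_may_reschedule(const struct nf_conn *ct)
{
	/* A confirmed conntrack is still bound to the old real server,
	 * so rescheduling is unsafe and the one-second delay applies.
	 */
	if (ct && nf_ct_is_confirmed(ct))
		return false;

	/* conntrack released the old TIME_WAIT entry and created a
	 * fresh NEW (unconfirmed) conntrack: a new real server can be
	 * selected without any delay.
	 */
	return true;
}
```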
margro pushed a commit to margro/linux that referenced this pull request May 28, 2023