Channel: Intel Communities: Message List - Wired Ethernet
Viewing all 9952 articles

Issues with 82599 connected via XAUI


Hi,

     We have a custom board with an 82599 connected directly to one of our processors via XAUI. The 82599 is also connected via PCIe to a Linux x86 box. We are currently using our own driver for the 82599, but we are not able to see the link come up or see any packets on the other end.

Also, the 82599 is connected to a blank EEPROM, so all registers start at their default values. (I don't think we can use the ixgbe driver, because it relies heavily on EEPROM settings.)

Setting the device in Tx->Rx loopback mode, we are able to see packets on the Rx side. So DMA, the Tx/Rx rings, etc. are all working properly.

 

My concern is the XAUI link itself, which seems very difficult to debug.

 

The following things are done during driver load:

  1. Soft reset (resets the registers to default values)
  2. In AUTOC, the following bits are set
    1. FLU = 1b (Force Link Up)
    2. ANSF = 1b (Auto-Neg selector field)
    3. 10G_PMA_PMD_PARALLEL  = 0 (XAUI)
    4. LMS = 001b (10 GbE parallel link KX4 no back-plane autonegotiation)
    5. KR_SUPPORT = 1b (default value)
    6. FECA = 1b (FEC Detect Ability, default value)
    7. ARNXAT = 11b (Backplane Auto-Negotiation Rx Align Threshold, default)
    8. ANRXDM = 1b (Auto-Negotiation Rx Drift Mode, default)
    9. ANRXLM = 1b (Auto-Negotiation Rx Loose Mode, default)
    10. KX_SUPPORT = 11b (default)
  3. And then AUTOC.Restart_AN is set
  4. I'm not seeing LINKS[Link Up]. So I manually set AUTOC[FLU] = 1 (Force Link Up)

 

I can't see any packets on either side. My question is: are any of those steps wrong? Any suggestions on what I can do next? Also, is it possible to get ixgbe to work with this setup?

 

Thanks,

Biju


Re: Cannot pass ipv6 packets into a VM using VFs (82599)


The issue has been found and addressed in the ixgbe driver in RHEL 6.5.

Since I wasn't getting anywhere working with Intel, I brought it to the attention of Red Hat. They worked in conjunction with HP (it was their device), and between the two of them they uncovered and solved the issue.

I cannot tell you specifically what it was, but they updated the ixgbe driver (3.15.1-k) and rolled that out in the new RHEL 6.5 kernel.

I suppose it will eventually be rolled back into the base Intel driver. As I stated earlier, the multicast register was getting cleared when the MTU was set greater than 1500 bytes and SR-IOV interfaces were in use.

Thanks,

Shawn

Re: Win8 Hyper-V and i217-V Support


Same here, but on Hyper-V Server 2012 R2. It freezes, and I have to restart or use System Restore.

I217-V and Hyper-V 2012 R2 problem


Motherboard: ASUS Z87 Deluxe (built-in Intel I217-V)

CPU: Intel i7 4771

 

I am trying to install drivers for the I217-V on Hyper-V Server 2012 R2, but with no luck. I can't seem to find any drivers except something called PROADMIN.exe.

 

Can anyone tell me how to install it on Hyper-V Server 2012 R2 (Windows Server 2012 R2 Core)?

Some VFs are assigned three IRQs and some receive only two


Greetings all,

 

We are using Ubuntu Precise with kernel 3.2.0-29-generic #46, qemu-kvm 1.0, and libvirt 0.9.8. We are using Intel 82599 NICs with ixgbe driver 3.11.33 and the in-tree ixgbevf driver 2.2.0-k. We are attaching VFs to KVM instances.

 

In lspci we see that the MSI-X table for each VF has three entries, like this:

03:17.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
        Subsystem: Intel Corporation Device 7a11
        Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Region 0: [virtual] Memory at df478000 (64-bit, non-prefetchable) [size=16K]
        Region 3: [virtual] Memory at df578000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [70] MSI-X: Enable+ Count=3 Masked-

 

But when attaching the VF to KVM, we see that most of the VFs receive only two IRQs, while only a few receive three IRQs.

For example:

 

Dec 13 11:04:17 ubuntu-sata-31 kernel: [    4.448999] ixgbevf 0000:03:17.3: irq 359 for MSI/MSI-X
Dec 13 11:04:17 ubuntu-sata-31 kernel: [    4.449016] ixgbevf 0000:03:17.3: irq 360 for MSI/MSI-X
Dec 13 11:04:17 ubuntu-sata-31 kernel: [    4.449033] ixgbevf 0000:03:17.3: irq 361 for MSI/MSI-X

Dec 15 10:23:32 ubuntu-sata-31 kernel: [170107.490908] pci-stub 0000:03:12.2: irq 138 for MSI/MSI-X
Dec 15 10:23:32 ubuntu-sata-31 kernel: [170107.490924] pci-stub 0000:03:12.2: irq 139 for MSI/MSI-X

Dec 15 10:23:32 ubuntu-sata-31 kernel: [170107.730559] pci-stub 0000:03:1c.1: irq 335 for MSI/MSI-X
Dec 15 10:23:32 ubuntu-sata-31 kernel: [170107.730580] pci-stub 0000:03:1c.1: irq 336 for MSI/MSI-X

 

Initial debugging shows that KVM reads the PCI configuration space to determine the number of IRQs to assign:

 

static int assigned_dev_update_msix_mmio(PCIDevice *pci_dev)
{
    AssignedDevice *adev = DO_UPCAST(AssignedDevice, dev, pci_dev);
    uint16_t entries_nr = 0, entries_max_nr;
    int pos = 0, i, r = 0;
    uint32_t msg_addr, msg_upper_addr, msg_data, msg_ctrl;
    struct kvm_assigned_msix_nr msix_nr;
    struct kvm_assigned_msix_entry msix_entry;
    void *va = adev->msix_table_page;

    pos = pci_find_capability(pci_dev, PCI_CAP_ID_MSIX);

    entries_max_nr = *(uint16_t *)(pci_dev->config + pos + 2);
    entries_max_nr &= PCI_MSIX_FLAGS_QSIZE;
    entries_max_nr += 1;

 

This yields entries_max_nr=3, as expected.

 

But then:

 

    /* Get the usable entry number for allocating */
    for (i = 0; i < entries_max_nr; i++) {
        memcpy(&msg_ctrl, va + i * 16 + 12, 4);
        memcpy(&msg_data, va + i * 16 + 8, 4);
        /* Ignore unused entry even it's unmasked */
        if (msg_data == 0)
            continue;
        entries_nr ++;
    }

 

And adding some prints shows that the third entry yields msg_data==0, so it is skipped.

 

ALEXL: PCIDEV(pci-assign) entries_max_nr=3
ALEXL: PCIDEV(pci-assign) entry #0 msg_data=16817
ALEXL: PCIDEV(pci-assign) entry #1 msg_data=16833
ALEXL: PCIDEV(pci-assign) entry #2 msg_data=0

ALEXL: PCIDEV(pci-assign) entries_max_nr=3
ALEXL: PCIDEV(pci-assign) entry #0 msg_data=16849
ALEXL: PCIDEV(pci-assign) entry #1 msg_data=16865
ALEXL: PCIDEV(pci-assign) entry #2 msg_data=0

ALEXL: PCIDEV(pci-assign) entries_max_nr=3
ALEXL: PCIDEV(pci-assign) entry #0 msg_data=16674
ALEXL: PCIDEV(pci-assign) entry #1 msg_data=16706
ALEXL: PCIDEV(pci-assign) entry #2 msg_data=0

 

We don't know what the implication is of having only two IRQs assigned instead of three. From an overall perspective, those KVM instances function normally. Can anybody please comment on this?

 

When using SR-IOV cards from another vendor, we see that those cards have 4 MSI-X entries per VF, and all of those entries have IRQs attached. So we see this issue only with the Intel cards.

 

Thanks,

Alex.

please add SCTP port in rss rx-flow-hash


We use the 82599 NIC with ixgbe driver version 3.19.1. Reading the 82599 datasheet, the Multiple Receive Queues Command register (MRQC) does not support hashing on SCTP ports for RSS. So SCTP packets with the same source and destination IP but different port numbers cannot be distributed to different Rx queues. Since SCTP does not have UDP's fragmentation issue, it would be reasonable to add the SCTP ports to the rx-flow-hash input.

SCTP checksum offloading performance issue


We use the 82599 NIC with ixgbe driver version 3.19.1. Because we use a user-space SCTP stack, we cannot use the checksum offloading feature as-is. I modified the driver code to skip the skb->ip_summed check. Because we use a raw IP socket, the sk_buff has no transport header pointer, so I add the IP header length to the Transmit Context Descriptor (TDESC). With this, our user-space SCTP stack can use checksum offloading.

But I find it can only transmit about 120k pps of SCTP packets (MTU 1500). It seems the HW checksum offloading introduces a lot of delay. Please give some advice.

82576 NICs not showing anymore after uninstalling software while teamed


Last night I was messing around with a Dell C6100 unit with four nodes, each with two 82576 adapters. At some point I installed PROSet on one of the nodes in order to try out teaming. Later I decided teaming was not needed and stupidly removed the teaming software without removing the two adapters from the team. The result is that I now have one node which is under the impression that it has no network cards.

 

Re-installing PROSet fails because no adapters can be found. I took a drive out of another node (with and without PROSet installed) to see if this would uncover anything, but no luck. It seems something has been written to the cards so that they no longer announce themselves.

 

I tried reflashing the BIOS and BMC, but this did not have any effect.

 

Does anybody have a clue what I could do to bring the adapters back to life?


Re: Cannot pass ipv6 packets into a VM using VFs (82599)


Hi Patrick,

 

Are you able to determine when this issue will be fixed, or are we required to migrate to Red Hat to get this working? :-)

 

Thanks

Regards

Kristoffer

Re: 82567LM keeps showing disconnect in event log


I'd like to add my two cents to this. Here it is 2014 and I have been troubleshooting this issue on brand-new Dell PCs we recently received. The new PCs all came with 82579LM adapters. I started seeing most of the same issues reported in this thread, and furthermore, on reboot or power-on, if you logged in immediately after seeing the login screen, network connectivity would fail, with all sorts of errors in the event logs: mostly no DNS servers, Domain Controllers, or Domains found, plus a slew of GPO and profile errors, because we use roaming profiles and folder redirection here. To be fair, I did have an older Dell PC here that displayed the same issues on boot-up or reboot, and it has a Broadcom adapter. There were basically three solutions that worked for me in resolving this issue.

 

1.  For the intermittent drops from the network throughout the day, I had to disable power management on both the Intel and the Broadcom adapters. Although the Broadcom never dropped the connection intermittently during the day, it did suffer network issues coming out of sleep mode; disabling power management fixed that.

 

2.  For both the Intel and Broadcom adapters, I had to configure spanning-tree portfast edge on each of these PCs' ports on my Cisco 6509 switch.

 

3.  On the Intel adapters I downloaded and installed the most recent drivers; for me that is release 18.8, dated 8/21/2013, version 12.10.28.0. I had to turn on Wait for Link on the Advanced tab of the NIC driver properties/configuration settings; Auto Detect and Off did not work for me.

 

Once I did these things, all has been well for about two weeks now. I'll update if this fails for me in the near future.

Re: Cannot pass ipv6 packets into a VM using VFs (82599)


Looks like it's working in version 3.19.1 ;-)

Re: Some VFs are assigned three IRQs and some receive only two


Hi Alex,

 

Our guess is that you have mismatched PF and VF driver versions. We made some modifications to how many interrupts we use per VF a while back and went from 3 to 2. Can you try the latest and greatest drivers located at http://sourceforge.net/projects/e1000/files/ and let us know how it goes?

 

The latest ixgbe PF driver is 3.19.1 and the latest ixgbevf driver is 2.12.1.

 

thanx,

 

Patrick

Re: Cannot pass ipv6 packets into a VM using VFs (82599)


Excellent!  Sorry I've been quiet of late; for some reason I haven't been getting notifications of updates. I'll go see what is going on and try to be more active.

 

- Patrick

Win XP with Pro/100 VE not connect to new DIR-655 router


Connected a new DIR-655 router, and one XP computer could not get a DHCP address. Multiple other computers worked fine. Eventually I tried disabling DHCP in the old router (WRT54G) and inserting it between the Pro/100 and the DIR-655, and it worked: the computer was addressed by the DIR-655.

 

I assume the issue is some capability of the DIR-655 that isn't handled by the Pro/100 but is handled when DHCP is forwarded on by the WRT54G. I'd like to know what to turn off in the DIR-655 to get it to work directly with the Pro/100.

I217-V unable to establish link at 1 Gb/s speed


The I217-V auto-negotiates only 100 Mb/s. Manually setting 1 Gb/s gives the message "network cable unplugged." Is this a known bug?

 

The I217-V is built into an ASUS Z87 Expert motherboard.

 

Any information regarding this is appreciated.


Re: I217-V No connection with 100 full duplex setting


I too have a problem with an I217-V on an ASUS Z87 Expert MB. It will not work at higher than 100 Mb/s. I have the latest driver (Dec 2013).

Re: Cannot pass ipv6 packets into a VM using VFs (82599)


I believe I found something that looks almost like that commit in the source code for 3.19.1, which I have confirmed to be working, but I have not been able to find any changelog for the official Intel drivers...?

Re: Cannot pass ipv6 packets into a VM using VFs (82599)


Checked with engineering; this fix will be available in an official release in late Q1 or early Q2.

Re: 82575EB onboard NICs support and teaming in W2012R2

$
0
0

Just wanted to bump this thread. I see a new PROWinx64.exe version 18.8, but it does not seem to support 82575EB NICs on W2012 R2. This mobo is still under warranty; shouldn't drivers be made for the latest OS?

 

thanks
