Channel: Intel Communities: Message List - Wired Ethernet

Re: An error occurred when updating NVM on X710 card


Hi,

I have uploaded the SSU output on the service request. Please check if you can receive it.


Re: x710 firmware update


Hi uqing, I've sent you a private message.

regards,
Vince

Re: x710 SR-IOV problems


Hi,

 

As mentioned previously, I am not using Dell drivers, but ones sourced from Intel.

I'm not sure this ping-pong between vendors is going to help; isn't Intel the maker of the drivers? Who else is supposed to know the most about the issue I'm facing?

Can you please explain what those error messages mean, and whether there is a workaround?

 

Thanks,

Ante

Re: Intel XL710


Hello,

 

Thanks for letting me know. I tried this out but there are still issues with speed.

 

Is there anything else that can be done?

 

Can you please let me know?

 

Thanks

Can't get information about Omnipath HFI on RHEL 7.3 hosts


Recently we got some new KNL nodes and decided to try RHEL 7.3 on these hosts running the 3.10.0-514.10.2.el7.x86_64 kernel.

 

After installing the IntelOPA-Basic software, upgrading the firmware on the HFI, and rebooting the nodes, we still can't get anything other than the following from opainfo:

 

[root@sknl0701 ~]# opainfo

oib_utils ERROR: [7534] open_verbs_ctx: failed to find verbs device

opainfo: Unable to open hfi:port 0:1

 

Even though the software and firmware updates never complain about any errors, we can see that the hfi1 driver will not load properly, even after forcing dracut to recreate the system image.

 

[root@shas0101 ~]# lsmod | grep hfi1

hfi1                  633634  1

rdmavt                 57992  1 hfi1

ib_mad                 47817  5 hfi1,ib_cm,ib_sa,rdmavt,ib_umad

ib_core                98787  14 hfi1,rdma_cm,ib_cm,ib_sa,iw_cm,xprtrdma,ib_mad,ib_ucm,rdmavt,ib_iser,ib_umad,ib_uverbs,ib_ipoib,ib_isert

i2c_algo_bit           13413  2 hfi1,mgag200

i2c_core               40582  6 drm,hfi1,ipmi_ssif,drm_kms_helper,mgag200,i2c_algo_bit

 

 

[root@sknl0701 ~]# modprobe -v hfi1

[root@sknl0701 ~]# lsmod | grep hfi1

hfi1                  697628  0

rdmavt                 63294  1 hfi1

ib_core               210381  13 hfi1,rdma_cm,ib_cm,iw_cm,rpcrdma,ib_ucm,rdmavt,ib_iser,ib_umad,ib_uverbs,rdma_ucm,ib_ipoib,ib_isert

i2c_algo_bit           13413  3 igb,hfi1,mgag200

i2c_core               40756  7 drm,igb,hfi1,ipmi_ssif,drm_kms_helper,mgag200,i2c_algo_bit

 

 

[root@sknl0701 ~]# yum info libibmad

Loaded plugins: product-id, search-disabled-repos, subscription-manager

Installed Packages

Name        : libibmad

Arch        : x86_64

Version     : 1.3.12

Release     : 1.el7

Size        : 132 k

Repo        : installed

From repo   : curc

Summary     : OpenFabrics Alliance InfiniBand MAD library

URL         : http://openfabrics.org/

License     : GPLv2 or BSD

Description : libibmad provides low layer IB functions for use by the IB diagnostic

            : and management programs. These include MAD, SA, SMP, and other basic

            : IB functions.

 

 

 

[root@sknl0701 ~]# yum info libibmad-devel

Loaded plugins: product-id, search-disabled-repos, subscription-manager

Installed Packages

Name        : libibmad-devel

Arch        : x86_64

Version     : 1.3.12

Release     : 1.el7

Size        : 50 k

Repo        : installed

From repo   : curc

Summary     : Development files for the libibmad library

URL         : http://openfabrics.org/

License     : GPLv2 or BSD

Description : Development files for the libibmad library.

 

libibmad has been installed on the new node as well, so I am out of ideas at the moment. Any help would be appreciated!
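One detail worth comparing in the two lsmod listings above is the hfi1 "used by" count (the third column): it is 1 on one host and 0 on the other after modprobe, and a count of 0 means the module is loaded but nothing has bound to it, which is consistent with opainfo failing to open hfi:port 0:1. A minimal sketch of pulling that count out of lsmod output (the here-document is sample data copied from above; on real hardware you would pipe `lsmod` in directly, and this awk one-liner is an illustration, not an official OPA tool):

```shell
# Sketch: extract the hfi1 "used by" count from lsmod-style output.
usecount=$(awk '$1 == "hfi1" { print $3 }' <<'EOF'
hfi1                  697628  0
rdmavt                 63294  1 hfi1
ib_core               210381  13 hfi1,rdma_cm,ib_cm
EOF
)
echo "hfi1 use count: $usecount"
# 0 here means the driver loaded but never bound to the device.
```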

Re: Ubuntu 16.04 and Intel XL710 SR-IOV - Packet drops


This is a PCIe NIC bought with a Cisco UCS server; the Cisco PN is UCSC-PCIE-ID40GF (Intel XL710 dual-port 40G QSFP+ NIC).

 

 

We checked some statistics with ethtool (report with zero counters filtered out below); there are no drops or errors, but port.VF_admin_queue_requests has a large value. Could this indicate the problem?

 

KVM-1:~# ethtool -S enp6s0f0 | egrep -v :\ 0 | column

NIC statistics:

     port.rx_bytes: 51293682

     port.tx_bytes: 44178577

     port.rx_unicast: 445716

     port.tx_unicast: 583502

     port.rx_multicast: 212105

     port.tx_multicast: 3529

     port.rx_broadcast: 100

     port.tx_broadcast: 59

     port.rx_size_127: 652740

     port.rx_size_255: 8

     port.rx_size_511: 5173

     port.tx_size_64: 139229

     port.tx_size_127: 447849

     port.tx_size_255: 12

     port.VF_admin_queue_requests: 206741

     port.fdir_atr_status: 1

     port.fdir_sb_status: 1

KVM-1:~#
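As a side note, the `egrep -v : 0` filter above matches on text and can drop or keep the wrong lines; comparing the value field numerically is a little more robust. A small sketch, using a few of the counters above as sample input (on a live system you would pipe `ethtool -S enp6s0f0` in directly; the awk filter is my own illustration, not part of ethtool):

```shell
# Sketch: keep only counters whose value is numerically non-zero.
nonzero=$(awk -F': ' 'NF == 2 && $2 + 0 != 0 { print $1 ": " $2 }' <<'EOF'
     port.rx_bytes: 51293682
     port.rx_errors: 0
     port.VF_admin_queue_requests: 206741
EOF
)
echo "$nonzero"
```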

Re: Can't get information about Omnipath HFI on RHEL 7.3 hosts


Hello @Wilshire461,

I am sure our Intel Omni-Path technical support team can assist you with this, however, they do not provide support through this venue.

I am forwarding your information for them to contact you directly. 

Have a nice day!

Dave A.


X540-T2 for Server 2016 Cluster Network


I have a couple of questions regarding the use of an Intel X540-T2 10G Ethernet card as the main network card in a Windows Server 2016 cluster:

 

  1. I cannot find official Intel drivers for the X540-T2 for Windows Server 2016. The page at Download Intel® Network Adapter Driver for Windows® 10 indicates that the drivers in that package are not compatible with Windows Server 2016. Can someone point me toward Intel-provided, Server 2016-compatible drivers for the X540-T2 cards?
  2. I am using a pair of these cards as the main network cards for a Windows Server 2016 cluster running the File Server role to provide a highly available set of file shares. The cards are currently using the Microsoft in-box drivers. When I configure the File Server role in the cluster, there is an error about the ISATAP tunnelling address failing to come online. The error shown in the event log is:

    IPv6 tunnel address resource 'IP Address 2002:xxxx:xxxx:x:x:xxxx:aa.bb.cc.dd' failed to come online. Cluster network 'Cluster Network 1' associated with dependent IP address (IPv4) resource 'IP address aa.bb.cc.dd' does not support ISATAP tunnelling. Please ensure that the cluster network supports ISATAP tunnelling.

    This error was not showing for the cluster network itself; it only appeared when I configured the File Server role. The error was also not present for the same role running on an old pair of servers on Windows Server 2012 with Broadcom NICs. I've searched online and there's not much out there about resolving this issue. Has anyone seen it before and found steps to resolve it?

 

Many thanks

Andy

Re: I210 Adapter restart triggers increased performance


We are good!  I am getting 112-114Mbs consistently now.  Thanks for your suggestions and your help!

Re: ULP enable/disable utility. Where to get?


Hi Looperx,

I sent you a PM; please provide the requested information there.

Thanks,
wb
 

Re: igb SR-IOV vf driver on FreeBSD strips VLAN tags


Hi ingenium,

Please feel free to provide the exact igb and igbvf driver version used. 

Thanks,
wb

Re: XL710 Malicious Driver Detection Event Occurred


Hi Hsuivan,

 Please feel free to provide the firmware version.

Thanks,
wb
 

Re: Issue with setting smp_affinity on ixgbe cards


Hi KM29,

Thank you for the update.  Have you tried turning off the irqbalance daemon? 

Thanks,
wb
 

Re: Issue with setting smp_affinity on ixgbe cards


Hi wb,

 

Since my goal is manual handling of affinity, I don't even have irqbalance installed, so I guess that takes it out of the equation.

 

Best Regards,

Kula Nimbus
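For manual affinity handling, the value written to /proc/irq/&lt;irq&gt;/smp_affinity is a hex bitmask of CPUs. A minimal sketch of building such a mask (the CPU and IRQ numbers here are made-up examples, and the actual write requires root on a real system, so it is left commented out):

```shell
# Sketch: build the hex bitmask for pinning an IRQ to a single CPU.
cpu=5    # example CPU index (0-63 with this simple shift)
irq=42   # example IRQ number
mask=$(printf '%x' $((1 << cpu)))
echo "smp_affinity mask for CPU $cpu: $mask"
# As root on a real system, the pin itself would be:
# echo "$mask" > /proc/irq/$irq/smp_affinity
# Watching /proc/interrupts afterwards shows whether the counts follow the mask.
```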


Re: XL710 Malicious Driver Detection Event Occurred


Well, I'm not the original topic starter, but if you're still interested, here it is:

 

ixl0: <Intel(R) Ethernet Connection XL710/X722 Driver, Version - 1.6.6-k> mem 0xfb000000-0xfb7fffff,0xfb808000-0xfb80ffff irq 56 at device 0.0 numa-domain 1 on pci13

ixl0: Using MSIX interrupts with 9 vectors

ixl0: fw 5.0.40043 api 1.5 nvm 5.02 etid 80002391 oem 1.261.0

ixl0: PF-ID[0]: VFs 64, MSIX 129, VF MSIX 5, QPs 768, I2C

ixl0: Allocating 8 queues for PF LAN VSI; 8 queues active

ixl0: Ethernet address: 0c:c4:7a:19:71:e0

ixl0: PCI Express Bus: Speed 8.0GT/s Width x8

ixl0: SR-IOV ready

 

So it's 5.0.4; at least, that's what the kernel shows.

 

nvmupdate64e shows a different version, but also reports that no update is available:

 

./nvmupdate64e

 

Intel(R) Ethernet NVM Update Tool

NVMUpdate version 1.28.19.4

Copyright (C) 2013 - 2016 Intel Corporation.

 

 

WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.

Inventory in progress. Please wait [|.........]

 

 

Num Description                               Ver. DevId S:B    Status

=== ======================================== ===== ===== ====== ===============

01) Intel(R) 82599 10 Gigabit Dual Port      64.37  10FB 00:001 Update not    

    Network Connection                                          available

02) Intel(R) Ethernet Controller XL710 for    5.02  1583 00:131 Update not    

    40GbE QSFP+                                                 available
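The apparent version mismatch is likely just two fields of the same boot line: "fw 5.0.40043" is the firmware build, while "nvm 5.02" matches the 5.02 that nvmupdate reports for the XL710 (the tool lists the NVM image version, not the firmware build). A trivial, purely illustrative sketch of pulling both fields out of that boot line:

```shell
# Sketch: parse fw/nvm versions out of the ixl probe message quoted above.
line='ixl0: fw 5.0.40043 api 1.5 nvm 5.02 etid 80002391 oem 1.261.0'
fw=$(echo "$line" | awk '{ print $3 }')    # field 3 follows the "fw" keyword
nvm=$(echo "$line" | awk '{ print $7 }')   # field 7 follows the "nvm" keyword
echo "fw=$fw nvm=$nvm"
```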

Re: I211/I217-V Windows 10 LACP teaming fails


Hello,

   The Windows Creators Update has been released, and I just installed it. I am able to create a team, but the teaming NIC remains disabled. I tried to enable it, but no luck.

This could be just me, but I wanted to report the issue here in case others are experiencing the same.

 

Thank you

Re: I211/I217-V Windows 10 LACP teaming fails


Which driver version are you using? 12.15.24.1?

Re: I211/I217-V Windows 10 LACP teaming fails


12.15.184.0; I downloaded the 22.1 package from the Intel site.

i219-v, interrupt moderation, slow downloads


Hi,

 

Brand new build: ASUS Z270-A, i7 7700K, NVMe SSD, 16GB RAM, fully updated Windows 10 Pro 64.

 

Recently I paid my ISP for a faster connection.  All my machines - Win 10 laptops, an older i7 desktop, and so on - get almost 300 Mbps download speeds.  The i219-v on the ASUS tops out at 250 Mbps.

 

Fully updated OS, BIOS, and Intel driver.  I've discovered that turning off adaptive interrupt moderation and setting it to medium lets me get almost the full download speed on the ASUS.  All my other machines - much slower than the 7700K build - get full speed even in adaptive mode.

 

If I connect via the same cable and router using a cheap USB 3.0-to-GigE dongle, the ASUS downloads at 300 Mbps, so it doesn't seem to be a Windows issue.

 

If I turn off interrupt moderation on all machines, they get 250 Mbps, but the i219-v only hits 200.  I'm concerned that the interrupt moderation "fix" is masking a deeper problem with this build.

 

It is very concerning that the "fastest" machine in the house has the slowest network performance.

 

Please advise,

Z.
