commit 707e840c5e24bb2df1ea4e753964275e257ec816
Author: Sasha Levin
Date: Mon Jan 25 06:56:00 2016 -0500

Linux 3.18.26

Signed-off-by: Sasha Levin

commit 1eacda6c0dc441384129b5260a485cbb8b4e218a
Author: Sasha Levin
Date: Mon Jan 25 07:00:06 2016 -0500

Revert "workqueue: make sure delayed work run in local cpu"

This reverts commit 1e7af294dd037af63e74fe13e2b6afb93105ed3f. This commit is only needed on 4.1+.

Signed-off-by: Sasha Levin

commit d25b4531a808bd0faae3dcd0553421d0570373d1
Author: Yevgeny Pats
Date: Tue Jan 19 22:09:04 2016 +0000

KEYS: Fix keyring ref leak in join_session_keyring()

[ Upstream commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2 ]

This fixes CVE-2016-0728.

If a thread is asked to join as a session keyring the keyring that's already set as its session, we leak a keyring reference.

This can be tested with the following program:

#include <stddef.h>
#include <stdio.h>
#include <sys/types.h>
#include <keyutils.h>

int main(int argc, const char *argv[])
{
	int i = 0;
	key_serial_t serial;

	serial = keyctl(KEYCTL_JOIN_SESSION_KEYRING, "leaked-keyring");
	if (serial < 0) {
		perror("keyctl");
		return -1;
	}

	if (keyctl(KEYCTL_SETPERM, serial, KEY_POS_ALL | KEY_USR_ALL) < 0) {
		perror("keyctl");
		return -1;
	}

	for (i = 0; i < 100; i++) {
		serial = keyctl(KEYCTL_JOIN_SESSION_KEYRING, "leaked-keyring");
		if (serial < 0) {
			perror("keyctl");
			return -1;
		}
	}

	return 0;
}

If, after the program has run, there is something like the following line in /proc/keys:

3f3d898f I--Q--- 100 perm 3f3f0000 0 0 keyring leaked-keyring: empty

with a usage count of 100 * the number of times the program has been run, then the kernel is malfunctioning. If leaked-keyring has zero usages or has been garbage collected, then the problem is fixed.

Reported-by: Yevgeny Pats
Signed-off-by: David Howells
Acked-by: Don Zickus
Acked-by: Prarit Bhargava
Acked-by: Jarod Wilson
Signed-off-by: James Morris
Signed-off-by: Sasha Levin

commit e41946e47ec501023afd7e5dfeb794ab7492e7c0
Author: David Howells
Date: Fri Dec 18 01:34:26 2015 +0000

KEYS: Fix race between read and revoke

[ Upstream commit b4a1b4f5047e4f54e194681125c74c0aa64d637d ]

This fixes CVE-2015-7550.

There's a race between keyctl_read() and keyctl_revoke(). If the revoke happens between keyctl_read() checking the validity of a key and the key's semaphore being taken, then the key type read method will see a revoked key.

This causes a problem for the user-defined key type because it assumes in its read method that there will always be a payload in a non-revoked key and doesn't check for a NULL pointer.

Fix this by making keyctl_read() check the validity of a key after taking its semaphore instead of before.

I think the bug was introduced with the original keyrings code.

This was discovered by a multithreaded test program generated by syzkaller (http://github.com/google/syzkaller).
Here's a cleaned up version:

#include <pthread.h>
#include <keyutils.h>

void *thr0(void *arg)
{
	key_serial_t key = (unsigned long)arg;

	keyctl_revoke(key);
	return 0;
}

void *thr1(void *arg)
{
	key_serial_t key = (unsigned long)arg;
	char buffer[16];

	keyctl_read(key, buffer, 16);
	return 0;
}

int main()
{
	key_serial_t key = add_key("user", "%", "foo", 3, KEY_SPEC_USER_KEYRING);
	pthread_t th[5];

	pthread_create(&th[0], 0, thr0, (void *)(unsigned long)key);
	pthread_create(&th[1], 0, thr1, (void *)(unsigned long)key);
	pthread_create(&th[2], 0, thr0, (void *)(unsigned long)key);
	pthread_create(&th[3], 0, thr1, (void *)(unsigned long)key);
	pthread_join(th[0], 0);
	pthread_join(th[1], 0);
	pthread_join(th[2], 0);
	pthread_join(th[3], 0);
	return 0;
}

Build as:

cc -o keyctl-race keyctl-race.c -lkeyutils -lpthread

Run as:

while keyctl-race; do :; done

as it may need several iterations to crash the kernel.

The crash can be summarised as:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
IP: [] user_read+0x56/0xa3
...
Call Trace:
[] keyctl_read_key+0xb6/0xd7
[] SyS_keyctl+0x83/0xe0
[] entry_SYSCALL_64_fastpath+0x12/0x6f

Reported-by: Dmitry Vyukov
Signed-off-by: David Howells
Tested-by: Dmitry Vyukov
Cc: stable@vger.kernel.org
Signed-off-by: James Morris
Signed-off-by: Sasha Levin

commit e49b606c05bf60d5b049c1a4ec0784a26c70745d
Author: WANG Cong
Date: Wed Dec 16 23:39:04 2015 -0800

net: check both type and procotol for tcp sockets

[ Upstream commit ac5cc977991d2dce85fc734a6c71ddb33f6fe3c1 ]

Dmitry reported the following out-of-bound access:

Call Trace:
[] __asan_report_load4_noabort+0x3e/0x40 mm/kasan/report.c:294
[] sock_setsockopt+0x1284/0x13d0 net/core/sock.c:880
[< inline >] SYSC_setsockopt net/socket.c:1746
[] SyS_setsockopt+0x1fe/0x240 net/socket.c:1729
[] entry_SYSCALL_64_fastpath+0x16/0x7a arch/x86/entry/entry_64.S:185

This is because we mistake a raw socket for a tcp socket. We should check both sk->sk_type and sk->sk_protocol to ensure it is a tcp socket.

Willem points out __skb_complete_tx_timestamp() needs to be fixed as well.

Reported-by: Dmitry Vyukov
Cc: Willem de Bruijn
Cc: Eric Dumazet
Signed-off-by: Cong Wang
Acked-by: Willem de Bruijn
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit b3bd889da33fb21063ac8b9ccc632339b7f0ffcf
Author: Ben Hutchings
Date: Wed Nov 18 02:01:21 2015 +0000

usb: Use the USB_SS_MULT() macro to decode burst multiplier for log message

[ Upstream commit 5377adb092664d336ac212499961cac5e8728794 ]

usb_parse_ss_endpoint_companion() now decodes the burst multiplier correctly in order to check that it's <= 3, but still uses the wrong expression when warning that it's > 3.
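For reference, USB_SS_MULT() is defined in include/uapi/linux/usb/ch9.h as 1 + (p & 0x3), i.e. only bits 1:0 of the SuperSpeed endpoint companion bmAttributes field encode the multiplier. The standalone sketch below only illustrates that decoding and is not taken from the patch itself; the sample bmAttributes value is made up:

#include <stdio.h>

/* Copied from include/uapi/linux/usb/ch9.h: bits 1:0 of the SuperSpeed
 * endpoint companion bmAttributes field encode "Mult", and the burst
 * multiplier is that field plus one. */
#define USB_SS_MULT(p) (1 + ((p) & 0x3))

int main(void)
{
	unsigned char bmAttributes = 0x02; /* hypothetical descriptor value */

	printf("bmAttributes=0x%02x -> burst multiplier=%d (valid if <= 3)\n",
	       bmAttributes, USB_SS_MULT(bmAttributes));
	return 0;
}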
Fixes: ff30cbc8da42 ("usb: Use the USB_SS_MULT() macro to get the ...") Signed-off-by: Ben Hutchings Cc: stable Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 292e5af9a1eebd2ab619e52ad81bd2fc41f4db6b Author: Hans Yang Date: Tue Dec 1 16:54:59 2015 +0800 usb: core : hub: Fix BOS 'NULL pointer' kernel panic [ Upstream commit 464ad8c43a9ead98c2b0eaed86bea727f2ad106e ] When a USB 3.0 mass storage device is disconnected in transporting state, storage device driver may handle it as a transport error and reset the device by invoking usb_reset_and_verify_device() and following could happen: in usb_reset_and_verify_device(): udev->bos = NULL; For U1/U2 enabled devices, driver will disable LPM, and in some conditions: from usb_unlocked_disable_lpm() --> usb_disable_lpm() --> usb_enable_lpm() udev->bos->ss_cap->bU1devExitLat; And it causes 'NULL pointer' and 'kernel panic': [ 157.976257] Unable to handle kernel NULL pointer dereference at virtual address 00000010 ... [ 158.026400] PC is at usb_enable_link_state+0x34/0x2e0 [ 158.031442] LR is at usb_enable_lpm+0x98/0xac ... [ 158.137368] [] usb_enable_link_state+0x34/0x2e0 [ 158.143451] [] usb_enable_lpm+0x94/0xac [ 158.148840] [] usb_disable_lpm+0xa8/0xb4 ... [ 158.214954] Kernel panic - not syncing: Fatal exception This commit moves 'udev->bos = NULL' behind usb_unlocked_disable_lpm() to prevent from NULL pointer access. Issue can be reproduced by following setup: 1) A SS pen drive behind a SS hub connected to the host. 2) Transporting data between the pen drive and the host. 3) Abruptly disconnect hub and pen drive from host. 4) With a chance it crashes. Signed-off-by: Hans Yang Acked-by: Alan Stern Cc: stable Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 51a55c40f8870d8260ba93b5c280a7fe579d956a Author: Arnd Bergmann Date: Wed Nov 18 17:18:40 2015 +0100 usb: musb: USB_TI_CPPI41_DMA requires dmaengine support [ Upstream commit 183e53e8ddf4165c3763181682189362d6b403f7 ] The CPPI-4.1 driver selects TI_CPPI41, which is a dmaengine driver and that may not be available when CONFIG_DMADEVICES is not set: warning: (USB_TI_CPPI41_DMA) selects TI_CPPI41 which has unmet direct dependencies (DMADEVICES && ARCH_OMAP) This adds an extra dependency to avoid generating warnings in randconfig builds. Ideally we'd remove the 'select' statement, but that has the potential to break defconfig files. Signed-off-by: Arnd Bergmann Fixes: 411dd19c682d ("usb: musb: Kconfig: Select the DMA driver if DMA mode of MUSB is enabled") Signed-off-by: Felipe Balbi Signed-off-by: Sasha Levin commit 8b6655c00766c81a107e3017e6598c2346d07d1a Author: Felipe Balbi Date: Wed Nov 18 17:06:00 2015 -0600 usb: gadget: pxa27x: fix suspend callback [ Upstream commit 391e6dcb37857d5659b53def2f41e2f56850d33c ] pxa27x disconnects pullups on suspend but doesn't notify the gadget driver about it, so gadget driver can't disable the endpoints it was using. This causes problems on resume because gadget core will think endpoints are still enabled and just ignore the following usb_ep_enable(). Fix this problem by calling gadget_driver->disconnect(). Cc: # v3.10+ Tested-by: Robert Jarzmik Signed-off-by: Felipe Balbi Signed-off-by: Sasha Levin commit a4e9e566beca4e3adbc8689bb5b583cd385572ea Author: Alexey Khoroshilov Date: Sat Nov 21 00:36:44 2015 +0300 USB: whci-hcd: add check for dma mapping error [ Upstream commit f9fa1887dcf26bd346665a6ae3d3f53dec54cba1 ] qset_fill_page_list() do not check for dma mapping errors. 
Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Alexey Khoroshilov Cc: stable Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 782f17fde163a785e9bea86605aaab540f831bd8 Author: Alan Stern Date: Thu Dec 10 15:27:21 2015 -0500 USB: add quirk for devices with broken LPM [ Upstream commit ad87e03213b552a5c33d5e1e7a19a73768397010 ] Some USB device / host controller combinations seem to have problems with Link Power Management. For example, Steinar found that his xHCI controller wouldn't handle bandwidth calculations correctly for two video cards simultaneously when LPM was enabled, even though the bus had plenty of bandwidth available. This patch introduces a new quirk flag for devices that should remain disabled for LPM, and creates quirk entries for Steinar's devices. Signed-off-by: Alan Stern Reported-by: Steinar H. Gunderson Cc: stable Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 5163e2188b94e9141c1e8e281398c616201fd4b5 Author: Konstantin Shkolnyy Date: Tue Nov 10 16:40:13 2015 -0600 USB: cp210x: Remove CP2110 ID from compatibility list [ Upstream commit 7c90e610b60cd1ed6abafd806acfaedccbbe52d1 ] CP2110 ID (0x10c4, 0xea80) doesn't belong here because it's a HID and completely different from CP210x devices. Signed-off-by: Konstantin Shkolnyy Cc: stable Signed-off-by: Johan Hovold Signed-off-by: Sasha Levin commit fc70c8a467f73028cd98a010b19a12be101d7dc2 Author: Jonas Jonsson Date: Sun Nov 22 11:47:18 2015 +0100 USB: serial: Another Infineon flash loader USB ID [ Upstream commit a0e80fbd56b4573de997c9a088a33abbc1121400 ] The flash loader has been seen on a Telit UE910 modem. The flash loader is a bit special, it presents both an ACM and CDC Data interface but only the latter is useful. Unless a magic string is sent to the device it will disappear and the regular modem device appears instead. Signed-off-by: Jonas Jonsson Tested-by: Daniele Palmas Cc: stable Signed-off-by: Johan Hovold Signed-off-by: Sasha Levin commit 753c4e20b6630a6fe5e3a931e8b50855e122c757 Author: Jonas Jonsson Date: Sun Nov 22 11:47:17 2015 +0100 USB: cdc_acm: Ignore Infineon Flash Loader utility [ Upstream commit f33a7f72e5fc033daccbb8d4753d7c5c41a4d67b ] Some modems, such as the Telit UE910, are using an Infineon Flash Loader utility. It has two interfaces, 2/2/0 (Abstract Modem) and 10/0/0 (CDC Data). The latter can be used as a serial interface to upgrade the firmware of the modem. However, that isn't possible when the cdc-acm driver takes control of the device. The following is an explanation of the behaviour by Daniele Palmas during discussion on linux-usb. "This is what happens when the device is turned on (without modifying the drivers): [155492.352031] usb 1-3: new high-speed USB device number 27 using ehci-pci [155492.485429] usb 1-3: config 1 interface 0 altsetting 0 endpoint 0x81 has an invalid bInterval 255, changing to 11 [155492.485436] usb 1-3: New USB device found, idVendor=058b, idProduct=0041 [155492.485439] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [155492.485952] cdc_acm 1-3:1.0: ttyACM0: USB ACM device This is the flashing device that is caught by the cdc-acm driver. Once the ttyACM appears, the application starts sending a magic string (simple write on the file descriptor) to keep the device in flashing mode. 
If this magic string is not properly received in a certain time interval, the modem goes on in normal operative mode: [155493.748094] usb 1-3: USB disconnect, device number 27 [155494.916025] usb 1-3: new high-speed USB device number 28 using ehci-pci [155495.059978] usb 1-3: New USB device found, idVendor=1bc7, idProduct=0021 [155495.059983] usb 1-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [155495.059986] usb 1-3: Product: 6 CDC-ACM + 1 CDC-ECM [155495.059989] usb 1-3: Manufacturer: Telit [155495.059992] usb 1-3: SerialNumber: 359658044004697 [155495.138958] cdc_acm 1-3:1.0: ttyACM0: USB ACM device [155495.140832] cdc_acm 1-3:1.2: ttyACM1: USB ACM device [155495.142827] cdc_acm 1-3:1.4: ttyACM2: USB ACM device [155495.144462] cdc_acm 1-3:1.6: ttyACM3: USB ACM device [155495.145967] cdc_acm 1-3:1.8: ttyACM4: USB ACM device [155495.147588] cdc_acm 1-3:1.10: ttyACM5: USB ACM device [155495.154322] cdc_ether 1-3:1.12 wwan0: register 'cdc_ether' at usb-0000:00:1a.7-3, Mobile Broadband Network Device, 00:00:11:12:13:14 Using the cdc-acm driver, the string, though being sent in the same way than using the usb-serial-simple driver (I can confirm that the data is passing properly since I used an hw usb sniffer), does not make the device to stay in flashing mode." Signed-off-by: Jonas Jonsson Tested-by: Daniele Palmas Cc: stable Signed-off-by: Johan Hovold Signed-off-by: Sasha Levin commit b51cf48b355ae33db38de69fbbe4ef49ad260d5f Author: Ilya Dryomov Date: Sun Oct 11 19:38:00 2015 +0200 rbd: don't leak parent_spec in rbd_dev_probe_parent() [ Upstream commit 1f2c6651f69c14d0d3a9cfbda44ea101b02160ba ] Currently we leak parent_spec and trigger a "parent reference underflow" warning if rbd_dev_create() in rbd_dev_probe_parent() fails. The problem is we take the !parent out_err branch and that only drops refcounts; parent_spec that would've been freed had we called rbd_dev_unparent() remains and triggers rbd_warn() in rbd_dev_parent_put() - at that point we have parent_spec != NULL and parent_ref == 0, so counter ends up being -1 after the decrement. Redo rbd_dev_probe_parent() to fix this. Cc: stable@vger.kernel.org # 3.10+, needs backporting for < 4.2 Signed-off-by: Ilya Dryomov Reviewed-by: Alex Elder Signed-off-by: Sasha Levin commit e7b7ee7bff87920c55281338c848fd148567c23f Author: Sasha Levin Date: Tue Sep 8 10:53:40 2015 -0400 RDS: verify the underlying transport exists before creating a connection [ Upstream commit 74e98eb085889b0d2d4908f59f6e00026063014f ] There was no verification that an underlying transport exists when creating a connection, this would cause dereferencing a NULL ptr. 
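The call pattern behind this (and the later "RDS: fix race condition when sending a message on unbound socket" entry) looks roughly like the sketch below. This is an illustrative stand-in, not the syzkaller/trinity reproducer: it races bind() against sendto() on a PF_RDS socket, the addresses are arbitrary, it needs the rds module loaded, and it is not guaranteed to trigger the crash:

#include <arpa/inet.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef AF_RDS
#define AF_RDS 21
#endif

static int fd;

/* Keep trying to bind; on a pre-fix kernel the bound address can become
 * visible to a sender before a transport is attached. */
static void *do_bind(void *arg)
{
	struct sockaddr_in sin = { .sin_family = AF_INET };

	inet_pton(AF_INET, "127.0.0.1", &sin.sin_addr);
	for (;;)
		bind(fd, (struct sockaddr *)&sin, sizeof(sin));
	return NULL;
}

/* Concurrently send, so rds_sendmsg may observe a bound socket whose
 * transport pointer is still NULL. */
static void *do_send(void *arg)
{
	struct sockaddr_in sin = { .sin_family = AF_INET };
	char buf[16] = "x";

	inet_pton(AF_INET, "127.0.0.2", &sin.sin_addr);
	for (;;)
		sendto(fd, buf, sizeof(buf), 0, (struct sockaddr *)&sin, sizeof(sin));
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
	if (fd < 0)
		return 1;
	pthread_create(&t1, NULL, do_bind, NULL);
	pthread_create(&t2, NULL, do_send, NULL);
	sleep(10);
	return 0;
}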
It might happen on sockets that weren't properly bound before attempting to send a message, which will cause a NULL ptr deref: [135546.047719] kasan: GPF could be caused by NULL-ptr deref or user memory accessgeneral protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC KASAN [135546.051270] Modules linked in: [135546.051781] CPU: 4 PID: 15650 Comm: trinity-c4 Not tainted 4.2.0-next-20150902-sasha-00041-gbaa1222-dirty #2527 [135546.053217] task: ffff8800835bc000 ti: ffff8800bc708000 task.ti: ffff8800bc708000 [135546.054291] RIP: __rds_conn_create (net/rds/connection.c:194) [135546.055666] RSP: 0018:ffff8800bc70fab0 EFLAGS: 00010202 [135546.056457] RAX: dffffc0000000000 RBX: 0000000000000f2c RCX: ffff8800835bc000 [135546.057494] RDX: 0000000000000007 RSI: ffff8800835bccd8 RDI: 0000000000000038 [135546.058530] RBP: ffff8800bc70fb18 R08: 0000000000000001 R09: 0000000000000000 [135546.059556] R10: ffffed014d7a3a23 R11: ffffed014d7a3a21 R12: 0000000000000000 [135546.060614] R13: 0000000000000001 R14: ffff8801ec3d0000 R15: 0000000000000000 [135546.061668] FS: 00007faad4ffb700(0000) GS:ffff880252000000(0000) knlGS:0000000000000000 [135546.062836] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b [135546.063682] CR2: 000000000000846a CR3: 000000009d137000 CR4: 00000000000006a0 [135546.064723] Stack: [135546.065048] ffffffffafe2055c ffffffffafe23fc1 ffffed00493097bf ffff8801ec3d0008 [135546.066247] 0000000000000000 00000000000000d0 0000000000000000 ac194a24c0586342 [135546.067438] 1ffff100178e1f78 ffff880320581b00 ffff8800bc70fdd0 ffff880320581b00 [135546.068629] Call Trace: [135546.069028] ? __rds_conn_create (include/linux/rcupdate.h:856 net/rds/connection.c:134) [135546.069989] ? rds_message_copy_from_user (net/rds/message.c:298) [135546.071021] rds_conn_create_outgoing (net/rds/connection.c:278) [135546.071981] rds_sendmsg (net/rds/send.c:1058) [135546.072858] ? perf_trace_lock (include/trace/events/lock.h:38) [135546.073744] ? lockdep_init (kernel/locking/lockdep.c:3298) [135546.074577] ? rds_send_drop_to (net/rds/send.c:976) [135546.075508] ? __might_fault (./arch/x86/include/asm/current.h:14 mm/memory.c:3795) [135546.076349] ? __might_fault (mm/memory.c:3795) [135546.077179] ? rds_send_drop_to (net/rds/send.c:976) [135546.078114] sock_sendmsg (net/socket.c:611 net/socket.c:620) [135546.078856] SYSC_sendto (net/socket.c:1657) [135546.079596] ? SYSC_connect (net/socket.c:1628) [135546.080510] ? trace_dump_stack (kernel/trace/trace.c:1926) [135546.081397] ? ring_buffer_unlock_commit (kernel/trace/ring_buffer.c:2479 kernel/trace/ring_buffer.c:2558 kernel/trace/ring_buffer.c:2674) [135546.082390] ? trace_buffer_unlock_commit (kernel/trace/trace.c:1749) [135546.083410] ? trace_event_raw_event_sys_enter (include/trace/events/syscalls.h:16) [135546.084481] ? do_audit_syscall_entry (include/trace/events/syscalls.h:16) [135546.085438] ? trace_buffer_unlock_commit (kernel/trace/trace.c:1749) [135546.085515] rds_ib_laddr_check(): addr 36.74.25.172 ret -99 node type -1 Acked-by: Santosh Shilimkar Signed-off-by: Sasha Levin Signed-off-by: David S. Miller Signed-off-by: Sasha Levin commit 680a7411284b379d611b62feabc2bfb9541f7aa1 Author: Emmanuel Grumbach Date: Mon Jan 5 09:42:40 2015 +0200 iwlwifi: bump firmware API for mvm devices to 12 [ Upstream commit 91f491fd7dfbae6e5ce5887293723d818adf7d5d ] This allows 3160 / 7260 / 7265 / 7265D / 8000 devices to use the latest version of the firmware. 
Signed-off-by: Emmanuel Grumbach Signed-off-by: Sasha Levin commit 2fef208eb570781bc93cba867538a515360435a8 Author: Emmanuel Grumbach Date: Wed Dec 31 12:31:46 2014 +0200 iwlwifi: 7000: fix reported firmware name for 7265D [ Upstream commit a443f5e16bb54fc0de693f92c79c8fb95edfbc20 ] We were advertising iwlwifi-7265-X.ucode instead of iwlwifi-7265D-X.ucode. Fix this. Signed-off-by: Emmanuel Grumbach Signed-off-by: Sasha Levin commit 7f765fcd744a381abdaf3c2a2307b95e8f5c0f63 Author: Lu, Han Date: Wed Nov 11 16:54:27 2015 +0800 ALSA: hda/hdmi - apply Skylake fix-ups to Broxton display codec [ Upstream commit e2656412f2a7343ecfd13eb74bac0a6e6e9c5aad ] Broxton and Skylake have the same behavior on display audio. So this patch applys Skylake fix-ups to Broxton. Signed-off-by: Lu, Han Cc: Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit 27327566b15132cfecc1ef056c26ea8ccceabd6e Author: Arnd Bergmann Date: Wed Sep 30 15:04:42 2015 +0200 ceph: fix message length computation [ Upstream commit 777d738a5e58ba3b6f3932ab1543ce93703f4873 ] create_request_message() computes the maximum length of a message, but uses the wrong type for the time stamp: sizeof(struct timespec) may be 8 or 16 depending on the architecture, while sizeof(struct ceph_timespec) is always 8, and that is what gets put into the message. Found while auditing the uses of timespec for y2038 problems. Fixes: b8e69066d8af ("ceph: include time stamp in every MDS request") Signed-off-by: Arnd Bergmann Signed-off-by: Yan, Zheng Signed-off-by: Sasha Levin commit 33fde5c22c1a352ec56a7500ab94cdfc7574faed Author: Junxiao Bi Date: Fri Nov 20 15:57:30 2015 -0800 ocfs2: fix umask ignored issue [ Upstream commit 8f1eb48758aacf6c1ffce18179295adbf3bd7640 ] New created file's mode is not masked with umask, and this makes umask not work for ocfs2 volume. Fixes: 702e5bc ("ocfs2: use generic posix ACL infrastructure") Signed-off-by: Junxiao Bi Cc: Gang He Cc: Mark Fasheh Cc: Joel Becker Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin commit 2de7d46289702bcc5727cb70857f8f6860c85ce8 Author: Jeff Layton Date: Wed Nov 25 13:50:11 2015 -0500 nfs: if we have no valid attrs, then don't declare the attribute cache valid [ Upstream commit c812012f9ca7cf89c9e1a1cd512e6c3b5be04b85 ] If we pass in an empty nfs_fattr struct to nfs_update_inode, it will (correctly) not update any of the attributes, but it then clears the NFS_INO_INVALID_ATTR flag, which indicates that the attributes are up to date. Don't clear the flag if the fattr struct has no valid attrs to apply. Reviewed-by: Steve French Signed-off-by: Jeff Layton Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust Signed-off-by: Sasha Levin commit d48e82da8b064ed3ce8bfa2d8b17d8566e811f4f Author: Benjamin Coddington Date: Fri Nov 20 09:56:20 2015 -0500 nfs4: start callback_ident at idr 1 [ Upstream commit c68a027c05709330fe5b2f50c50d5fa02124b5d8 ] If clp->cl_cb_ident is zero, then nfs_cb_idr_remove_locked() skips removing it when the nfs_client is freed. A decoding or server bug can then find and try to put that first nfs_client which would lead to a crash. 
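The underlying problem is the classic one of an ID allocator handing out 0 while 0 is also used to mean "no ID assigned". A minimal userspace sketch of the idea behind the fix (names here are made up; in the kernel the equivalent is asking idr_alloc() to start at 1 so cl_cb_ident is never 0):

#include <stdio.h>

#define MAX_IDS 8

static void *slots[MAX_IDS]; /* index 0 is intentionally never handed out */

/* Allocate the lowest free ID starting at 1 so that an ID of 0 can
 * safely be treated as "not registered" by cleanup paths. */
static int id_alloc(void *ptr)
{
	for (int id = 1; id < MAX_IDS; id++) {
		if (!slots[id]) {
			slots[id] = ptr;
			return id;
		}
	}
	return -1;
}

int main(void)
{
	int client = 42;

	printf("allocated id %d (never 0)\n", id_alloc(&client));
	return 0;
}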
Signed-off-by: Benjamin Coddington Fixes: d6870312659d ("nfs4client: convert to idr_alloc()") Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust Signed-off-by: Sasha Levin commit e46b3f459f2efa11329013b3df96723ec4d2f314 Author: Jeff Layton Date: Thu Sep 17 07:47:08 2015 -0400 nfsd: serialize state seqid morphing operations [ Upstream commit 35a92fe8770ce54c5eb275cd76128645bea2d200 ] Andrew was seeing a race occur when an OPEN and OPEN_DOWNGRADE were running in parallel. The server would receive the OPEN_DOWNGRADE first and check its seqid, but then an OPEN would race in and bump it. The OPEN_DOWNGRADE would then complete and bump the seqid again. The result was that the OPEN_DOWNGRADE would be applied after the OPEN, even though it should have been rejected since the seqid changed. The only recourse we have here I think is to serialize operations that bump the seqid in a stateid, particularly when we're given a seqid in the call. To address this, we add a new rw_semaphore to the nfs4_ol_stateid struct. We do a down_write prior to checking the seqid after looking up the stateid to ensure that nothing else is going to bump it while we're operating on it. In the case of OPEN, we do a down_read, as the call doesn't contain a seqid. Those can run in parallel -- we just need to serialize them when there is a concurrent OPEN_DOWNGRADE or CLOSE. LOCK and LOCKU however always take the write lock as there is no opportunity for parallelizing those. Reported-and-Tested-by: Andrew W Elble Signed-off-by: Jeff Layton Cc: stable@vger.kernel.org Signed-off-by: J. Bruce Fields Signed-off-by: Sasha Levin commit 1d985e6898cd3fccd11c9f9d37a2f86ff72311c9 Author: Stefan Richter Date: Tue Nov 3 01:46:21 2015 +0100 firewire: ohci: fix JMicron JMB38x IT context discovery [ Upstream commit 100ceb66d5c40cc0c7018e06a9474302470be73c ] Reported by Clifford and Craig for JMicron OHCI-1394 + SDHCI combo controllers: Often or even most of the time, the controller is initialized with the message "added OHCI v1.10 device as card 0, 4 IR + 0 IT contexts, quirks 0x10". With 0 isochronous transmit DMA contexts (IT contexts), applications like audio output are impossible. However, OHCI-1394 demands that at least 4 IT contexts are implemented by the link layer controller, and indeed JMicron JMB38x do implement four of them. Only their IsoXmitIntMask register is unreliable at early access. With my own JMB381 single function controller I found: - I can reproduce the problem with a lower probability than Craig's. - If I put a loop around the section which clears and reads IsoXmitIntMask, then either the first or the second attempt will return the correct initial mask of 0x0000000f. I never encountered a case of needing more than a second attempt. - Consequently, if I put a dummy reg_read(...IsoXmitIntMaskSet) before the first write, the subsequent read will return the correct result. - If I merely ignore a wrong read result and force the known real result, later isochronous transmit DMA usage works just fine. So let's just fix this chip bug up by the latter method. Tested with JMB381 on kernel 3.13 and 4.3. Since OHCI-1394 generally requires 4 IT contexts at a minium, this workaround is simply applied whenever the initial read of IsoXmitIntMask returns 0, regardless whether it's a JMicron chip or not. I never heard of this issue together with any other chip though. 
I am not 100% sure that this fix works on the OHCI-1394 part of JMB380 and JMB388 combo controllers exactly the same as on the JMB381 single- function controller, but so far I haven't had a chance to let an owner of a combo chip run a patched kernel. Strangely enough, IsoRecvIntMask is always reported correctly, even though it is probed right before IsoXmitIntMask. Reported-by: Clifford Dunn Reported-by: Craig Moore Cc: stable@vger.kernel.org Signed-off-by: Stefan Richter Signed-off-by: Sasha Levin commit 8fecc1e2c4b4a71abbbead8afe3098fce2863569 Author: Daeho Jeong Date: Sun Oct 18 17:02:56 2015 -0400 ext4, jbd2: ensure entering into panic after recording an error in superblock [ Upstream commit 4327ba52afd03fc4b5afa0ee1d774c9c5b0e85c5 ] If a EXT4 filesystem utilizes JBD2 journaling and an error occurs, the journaling will be aborted first and the error number will be recorded into JBD2 superblock and, finally, the system will enter into the panic state in "errors=panic" option. But, in the rare case, this sequence is little twisted like the below figure and it will happen that the system enters into panic state, which means the system reset in mobile environment, before completion of recording an error in the journal superblock. In this case, e2fsck cannot recognize that the filesystem failure occurred in the previous run and the corruption wouldn't be fixed. Task A Task B ext4_handle_error() -> jbd2_journal_abort() -> __journal_abort_soft() -> __jbd2_journal_abort_hard() | -> journal->j_flags |= JBD2_ABORT; | | __ext4_abort() | -> jbd2_journal_abort() | | -> __journal_abort_soft() | | -> if (journal->j_flags & JBD2_ABORT) | | return; | -> panic() | -> jbd2_journal_update_sb_errno() Tested-by: Hobin Woo Signed-off-by: Daeho Jeong Signed-off-by: Theodore Ts'o Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin commit ebf6b5329105efecf9b9dfc45d486b97cecbf86e Author: Ilya Dryomov Date: Fri Nov 27 19:23:24 2015 +0100 rbd: don't put snap_context twice in rbd_queue_workfn() [ Upstream commit 70b16db86f564977df074072143284aec2cb1162 ] Commit 4e752f0ab0e8 ("rbd: access snapshot context and mapping size safely") moved ceph_get_snap_context() out of rbd_img_request_create() and into rbd_queue_workfn(), adding a ceph_put_snap_context() to the error path in rbd_queue_workfn(). However, rbd_img_request_create() consumes a ref on snapc, so calling ceph_put_snap_context() after a successful rbd_img_request_create() leads to an extra put. Fix it. Cc: stable@vger.kernel.org # 3.18+ Signed-off-by: Ilya Dryomov Reviewed-by: Josh Durgin Signed-off-by: Sasha Levin commit c0cdea628d04f529189ae4fa84ce7abfefe938e3 Author: Filipe Manana Date: Mon Nov 9 18:06:38 2015 +0000 Btrfs: fix race when listing an inode's xattrs [ Upstream commit f1cd1f0b7d1b5d4aaa5711e8f4e4898b0045cb6d ] When listing a inode's xattrs we have a time window where we race against a concurrent operation for adding a new hard link for our inode that makes us not return any xattr to user space. In order for this to happen, the first xattr of our inode needs to be at slot 0 of a leaf and the previous leaf must still have room for an inode ref (or extref) item, and this can happen because an inode's listxattrs callback does not lock the inode's i_mutex (nor does the VFS does it for us), but adding a hard link to an inode makes the VFS lock the inode's i_mutex before calling the inode's link callback. If we have the following leafs: Leaf X (has N items) Leaf Y [ ... (257 INODE_ITEM 0) (257 INODE_REF 256) ] [ (257 XATTR_ITEM 12345), ... 
] slot N - 2 slot N - 1 slot 0 The race illustrated by the following sequence diagram is possible: CPU 1 CPU 2 btrfs_listxattr() searches for key (257 XATTR_ITEM 0) gets path with path->nodes[0] == leaf X and path->slots[0] == N because path->slots[0] is >= btrfs_header_nritems(leaf X), it calls btrfs_next_leaf() btrfs_next_leaf() releases the path adds key (257 INODE_REF 666) to the end of leaf X (slot N), and leaf X now has N + 1 items searches for the key (257 INODE_REF 256), with path->keep_locks == 1, because that is the last key it saw in leaf X before releasing the path ends up at leaf X again and it verifies that the key (257 INODE_REF 256) is no longer the last key in leaf X, so it returns with path->nodes[0] == leaf X and path->slots[0] == N, pointing to the new item with key (257 INODE_REF 666) btrfs_listxattr's loop iteration sees that the type of the key pointed by the path is different from the type BTRFS_XATTR_ITEM_KEY and so it breaks the loop and stops looking for more xattr items --> the application doesn't get any xattr listed for our inode So fix this by breaking the loop only if the key's type is greater than BTRFS_XATTR_ITEM_KEY and skip the current key if its type is smaller. Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana Signed-off-by: Sasha Levin commit 938165ad1c0617047358cb0f0a2b27e7f92e8eca Author: Filipe Manana Date: Mon Nov 9 00:33:58 2015 +0000 Btrfs: fix race leading to BUG_ON when running delalloc for nodatacow [ Upstream commit 1d512cb77bdbda80f0dd0620a3b260d697fd581d ] If we are using the NO_HOLES feature, we have a tiny time window when running delalloc for a nodatacow inode where we can race with a concurrent link or xattr add operation leading to a BUG_ON. This happens because at run_delalloc_nocow() we end up casting a leaf item of type BTRFS_INODE_[REF|EXTREF]_KEY or of type BTRFS_XATTR_ITEM_KEY to a file extent item (struct btrfs_file_extent_item) and then analyse its extent type field, which won't match any of the expected extent types (values BTRFS_FILE_EXTENT_[REG|PREALLOC|INLINE]) and therefore trigger an explicit BUG_ON(1). The following sequence diagram shows how the race happens when running a no-cow dellaloc range [4K, 8K[ for inode 257 and we have the following neighbour leafs: Leaf X (has N items) Leaf Y [ ... (257 INODE_ITEM 0) (257 INODE_REF 256) ] [ (257 EXTENT_DATA 8192), ... 
] slot N - 2 slot N - 1 slot 0 (Note the implicit hole for inode 257 regarding the [0, 8K[ range) CPU 1 CPU 2 run_dealloc_nocow() btrfs_lookup_file_extent() --> searches for a key with value (257 EXTENT_DATA 4096) in the fs/subvol tree --> returns us a path with path->nodes[0] == leaf X and path->slots[0] == N because path->slots[0] is >= btrfs_header_nritems(leaf X), it calls btrfs_next_leaf() btrfs_next_leaf() --> releases the path hard link added to our inode, with key (257 INODE_REF 500) added to the end of leaf X, so leaf X now has N + 1 keys --> searches for the key (257 INODE_REF 256), because it was the last key in leaf X before it released the path, with path->keep_locks set to 1 --> ends up at leaf X again and it verifies that the key (257 INODE_REF 256) is no longer the last key in the leaf, so it returns with path->nodes[0] == leaf X and path->slots[0] == N, pointing to the new item with key (257 INODE_REF 500) the loop iteration of run_dealloc_nocow() does not break out the loop and continues because the key referenced in the path at path->nodes[0] and path->slots[0] is for inode 257, its type is < BTRFS_EXTENT_DATA_KEY and its offset (500) is less then our delalloc range's end (8192) the item pointed by the path, an inode reference item, is (incorrectly) interpreted as a file extent item and we get an invalid extent type, leading to the BUG_ON(1): if (extent_type == BTRFS_FILE_EXTENT_REG || extent_type == BTRFS_FILE_EXTENT_PREALLOC) { (...) } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { (...) } else { BUG_ON(1) } The same can happen if a xattr is added concurrently and ends up having a key with an offset smaller then the delalloc's range end. So fix this by skipping keys with a type smaller than BTRFS_EXTENT_DATA_KEY. Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana Signed-off-by: Sasha Levin commit c03844aec669f455b26f06a75e7d16d36bed32e0 Author: Filipe Manana Date: Fri Nov 6 13:33:33 2015 +0000 Btrfs: fix race leading to incorrect item deletion when dropping extents [ Upstream commit aeafbf8486c9e2bd53f5cc3c10c0b7fd7149d69c ] While running a stress test I got the following warning triggered: [191627.672810] ------------[ cut here ]------------ [191627.673949] WARNING: CPU: 8 PID: 8447 at fs/btrfs/file.c:779 __btrfs_drop_extents+0x391/0xa50 [btrfs]() (...) [191627.701485] Call Trace: [191627.702037] [] dump_stack+0x4f/0x7b [191627.702992] [] ? console_unlock+0x356/0x3a2 [191627.704091] [] warn_slowpath_common+0xa1/0xbb [191627.705380] [] ? __btrfs_drop_extents+0x391/0xa50 [btrfs] [191627.706637] [] warn_slowpath_null+0x1a/0x1c [191627.707789] [] __btrfs_drop_extents+0x391/0xa50 [btrfs] [191627.709155] [] ? cache_alloc_debugcheck_after.isra.32+0x171/0x1d0 [191627.712444] [] ? kmemleak_alloc_recursive.constprop.40+0x16/0x18 [191627.714162] [] insert_reserved_file_extent.constprop.40+0x83/0x24e [btrfs] [191627.715887] [] ? start_transaction+0x3bb/0x610 [btrfs] [191627.717287] [] btrfs_finish_ordered_io+0x273/0x4e2 [btrfs] [191627.728865] [] finish_ordered_fn+0x15/0x17 [btrfs] [191627.730045] [] normal_work_helper+0x14c/0x32c [btrfs] [191627.731256] [] btrfs_endio_write_helper+0x12/0x14 [btrfs] [191627.732661] [] process_one_work+0x24c/0x4ae [191627.733822] [] worker_thread+0x206/0x2c2 [191627.734857] [] ? process_scheduled_works+0x2f/0x2f [191627.736052] [] ? process_scheduled_works+0x2f/0x2f [191627.737349] [] kthread+0xef/0xf7 [191627.738267] [] ? time_hardirqs_on+0x15/0x28 [191627.739330] [] ? 
__kthread_parkme+0xad/0xad [191627.741976] [] ret_from_fork+0x42/0x70 [191627.743080] [] ? __kthread_parkme+0xad/0xad [191627.744206] ---[ end trace bbfddacb7aaada8d ]--- $ cat -n fs/btrfs/file.c 691 int __btrfs_drop_extents(struct btrfs_trans_handle *trans, (...) 758 btrfs_item_key_to_cpu(leaf, &key, path->slots[0]); 759 if (key.objectid > ino || 760 key.type > BTRFS_EXTENT_DATA_KEY || key.offset >= end) 761 break; 762 763 fi = btrfs_item_ptr(leaf, path->slots[0], 764 struct btrfs_file_extent_item); 765 extent_type = btrfs_file_extent_type(leaf, fi); 766 767 if (extent_type == BTRFS_FILE_EXTENT_REG || 768 extent_type == BTRFS_FILE_EXTENT_PREALLOC) { (...) 774 } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { (...) 778 } else { 779 WARN_ON(1); 780 extent_end = search_start; 781 } (...) This happened because the item we were processing did not match a file extent item (its key type != BTRFS_EXTENT_DATA_KEY), and even on this case we cast the item to a struct btrfs_file_extent_item pointer and then find a type field value that does not match any of the expected values (BTRFS_FILE_EXTENT_[REG|PREALLOC|INLINE]). This scenario happens due to a tiny time window where a race can happen as exemplified below. For example, consider the following scenario where we're using the NO_HOLES feature and we have the following two neighbour leafs: Leaf X (has N items) Leaf Y [ ... (257 INODE_ITEM 0) (257 INODE_REF 256) ] [ (257 EXTENT_DATA 8192), ... ] slot N - 2 slot N - 1 slot 0 Our inode 257 has an implicit hole in the range [0, 8K[ (implicit rather than explicit because NO_HOLES is enabled). Now if our inode has an ordered extent for the range [4K, 8K[ that is finishing, the following can happen: CPU 1 CPU 2 btrfs_finish_ordered_io() insert_reserved_file_extent() __btrfs_drop_extents() Searches for the key (257 EXTENT_DATA 4096) through btrfs_lookup_file_extent() Key not found and we get a path where path->nodes[0] == leaf X and path->slots[0] == N Because path->slots[0] is >= btrfs_header_nritems(leaf X), we call btrfs_next_leaf() btrfs_next_leaf() releases the path inserts key (257 INODE_REF 4096) at the end of leaf X, leaf X now has N + 1 keys, and the new key is at slot N btrfs_next_leaf() searches for key (257 INODE_REF 256), with path->keep_locks set to 1, because it was the last key it saw in leaf X finds it in leaf X again and notices it's no longer the last key of the leaf, so it returns 0 with path->nodes[0] == leaf X and path->slots[0] == N (which is now < btrfs_header_nritems(leaf X)), pointing to the new key (257 INODE_REF 4096) __btrfs_drop_extents() casts the item at path->nodes[0], slot path->slots[0], to a struct btrfs_file_extent_item - it does not skip keys for the target inode with a type less than BTRFS_EXTENT_DATA_KEY (BTRFS_INODE_REF_KEY < BTRFS_EXTENT_DATA_KEY) sees a bogus value for the type field triggering the WARN_ON in the trace shown above, and sets extent_end = search_start (4096) does the if-then-else logic to fixup 0 length extent items created by a past bug from hole punching: if (extent_end == key.offset && extent_end >= search_start) goto delete_extent_item; that evaluates to true and it ends up deleting the key pointed to by path->slots[0], (257 INODE_REF 4096), from leaf X The same could happen for example for a xattr that ends up having a key with an offset value that matches search_start (very unlikely but not impossible). 
So fix this by ensuring that keys smaller than BTRFS_EXTENT_DATA_KEY are skipped, never casted to struct btrfs_file_extent_item and never deleted by accident. Also protect against the unexpected case of getting a key for a lower inode number by skipping that key and issuing a warning. Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana Signed-off-by: Sasha Levin commit d7ca88a480683b1589479bd15eeef69402e9934b Author: Filipe Manana Date: Tue Oct 13 15:15:00 2015 +0100 Btrfs: fix file corruption and data loss after cloning inline extents [ Upstream commit 8039d87d9e473aeb740d4fdbd59b9d2f89b2ced9 ] Currently the clone ioctl allows to clone an inline extent from one file to another that already has other (non-inlined) extents. This is a problem because btrfs is not designed to deal with files having inline and regular extents, if a file has an inline extent then it must be the only extent in the file and must start at file offset 0. Having a file with an inline extent followed by regular extents results in EIO errors when doing reads or writes against the first 4K of the file. Also, the clone ioctl allows one to lose data if the source file consists of a single inline extent, with a size of N bytes, and the destination file consists of a single inline extent with a size of M bytes, where we have M > N. In this case the clone operation removes the inline extent from the destination file and then copies the inline extent from the source file into the destination file - we lose the M - N bytes from the destination file, a read operation will get the value 0x00 for any bytes in the the range [N, M] (the destination inode's i_size remained as M, that's why we can read past N bytes). So fix this by not allowing such destructive operations to happen and return errno EOPNOTSUPP to user space. Currently the fstest btrfs/035 tests the data loss case but it totally ignores this - i.e. expects the operation to succeed and does not check the we got data loss. The following test case for fstests exercises all these cases that result in file corruption and data loss: seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" tmp=/tmp/$$ status=1 # failure is the default! trap "_cleanup; exit \$status" 0 1 2 3 15 _cleanup() { rm -f $tmp.* } # get standard environment, filters and checks . ./common/rc . ./common/filter # real QA test starts here _need_to_be_root _supported_fs btrfs _supported_os Linux _require_scratch _require_cloner _require_btrfs_fs_feature "no_holes" _require_btrfs_mkfs_feature "no-holes" rm -f $seqres.full test_cloning_inline_extents() { local mkfs_opts=$1 local mount_opts=$2 _scratch_mkfs $mkfs_opts >>$seqres.full 2>&1 _scratch_mount $mount_opts # File bar, the source for all the following clone operations, consists # of a single inline extent (50 bytes). $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \ | _filter_xfs_io # Test cloning into a file with an extent (non-inlined) where the # destination offset overlaps that extent. It should not be possible to # clone the inline extent from file bar into this file. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo # Doing IO against any range in the first 4K of the file should work. # Due to a past clone ioctl bug which allowed cloning the inline extent, # these operations resulted in EIO errors. 
echo "File foo data after clone operation:" # All bytes should have the value 0xaa (clone operation failed and did # not modify our file). od -t x1 $SCRATCH_MNT/foo $XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io # Test cloning the inline extent against a file which has a hole in its # first 4K followed by a non-inlined extent. It should not be possible # as well to clone the inline extent from file bar into this file. $XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2 # Doing IO against any range in the first 4K of the file should work. # Due to a past clone ioctl bug which allowed cloning the inline extent, # these operations resulted in EIO errors. echo "File foo2 data after clone operation:" # All bytes should have the value 0x00 (clone operation failed and did # not modify our file). od -t x1 $SCRATCH_MNT/foo2 $XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io # Test cloning the inline extent against a file which has a size of zero # but has a prealloc extent. It should not be possible as well to clone # the inline extent from file bar into this file. $XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3 # Doing IO against any range in the first 4K of the file should work. # Due to a past clone ioctl bug which allowed cloning the inline extent, # these operations resulted in EIO errors. echo "First 50 bytes of foo3 after clone operation:" # Should not be able to read any bytes, file has 0 bytes i_size (the # clone operation failed and did not modify our file). od -t x1 $SCRATCH_MNT/foo3 $XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io # Test cloning the inline extent against a file which consists of a # single inline extent that has a size not greater than the size of # bar's inline extent (40 < 50). # It should be possible to do the extent cloning from bar to this file. $XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4 # Doing IO against any range in the first 4K of the file should work. echo "File foo4 data after clone operation:" # Must match file bar's content. od -t x1 $SCRATCH_MNT/foo4 $XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io # Test cloning the inline extent against a file which consists of a # single inline extent that has a size greater than the size of bar's # inline extent (60 > 50). # It should not be possible to clone the inline extent from file bar # into this file. $XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5 # Reading the file should not fail. echo "File foo5 data after clone operation:" # Must have a size of 60 bytes, with all bytes having a value of 0x03 # (the clone operation failed and did not modify our file). od -t x1 $SCRATCH_MNT/foo5 # Test cloning the inline extent against a file which has no extents but # has a size greater than bar's inline extent (16K > 50). # It should not be possible to clone the inline extent from file bar # into this file. $XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6 # Reading the file should not fail. 
echo "File foo6 data after clone operation:" # Must have a size of 16K, with all bytes having a value of 0x00 (the # clone operation failed and did not modify our file). od -t x1 $SCRATCH_MNT/foo6 # Test cloning the inline extent against a file which has no extents but # has a size not greater than bar's inline extent (30 < 50). # It should be possible to clone the inline extent from file bar into # this file. $XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7 # Reading the file should not fail. echo "File foo7 data after clone operation:" # Must have a size of 50 bytes, with all bytes having a value of 0xbb. od -t x1 $SCRATCH_MNT/foo7 # Test cloning the inline extent against a file which has a size not # greater than the size of bar's inline extent (20 < 50) but has # a prealloc extent that goes beyond the file's size. It should not be # possible to clone the inline extent from bar into this file. $XFS_IO_PROG -f -c "falloc -k 0 1M" \ -c "pwrite -S 0x88 0 20" \ $SCRATCH_MNT/foo8 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8 echo "File foo8 data after clone operation:" # Must have a size of 20 bytes, with all bytes having a value of 0x88 # (the clone operation did not modify our file). od -t x1 $SCRATCH_MNT/foo8 _scratch_unmount } echo -e "\nTesting without compression and without the no-holes feature...\n" test_cloning_inline_extents echo -e "\nTesting with compression and without the no-holes feature...\n" test_cloning_inline_extents "" "-o compress" echo -e "\nTesting without compression and with the no-holes feature...\n" test_cloning_inline_extents "-O no-holes" "" echo -e "\nTesting with compression and with the no-holes feature...\n" test_cloning_inline_extents "-O no-holes" "-o compress" status=0 exit Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana Signed-off-by: Sasha Levin commit 96c7b10cd8000973035b667adf62ea1164f4b9b4 Author: Quentin Casasnovas Date: Tue Nov 24 17:13:21 2015 -0500 RDS: fix race condition when sending a message on unbound socket [ Upstream commit 8c7188b23474cca017b3ef354c4a58456f68303a ] Sasha's found a NULL pointer dereference in the RDS connection code when sending a message to an apparently unbound socket. The problem is caused by the code checking if the socket is bound in rds_sendmsg(), which checks the rs_bound_addr field without taking a lock on the socket. This opens a race where rs_bound_addr is temporarily set but where the transport is not in rds_bind(), leading to a NULL pointer dereference when trying to dereference 'trans' in __rds_conn_create(). Vegard wrote a reproducer for this issue, so kindly ask him to share if you're interested. I cannot reproduce the NULL pointer dereference using Vegard's reproducer with this patch, whereas I could without. Complete earlier incomplete fix to CVE-2015-6937: 74e98eb08588 ("RDS: verify the underlying transport exists before creating a connection") Cc: David S. Miller Cc: stable@vger.kernel.org Reviewed-by: Vegard Nossum Reviewed-by: Sasha Levin Acked-by: Santosh Shilimkar Signed-off-by: Quentin Casasnovas Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin commit 72032798034d921ed565e3bf8dfdc3098f6473e2 Author: Rainer Weikusat Date: Fri Nov 20 22:07:23 2015 +0000 unix: avoid use-after-free in ep_remove_wait_queue [ Upstream commit 7d267278a9ece963d77eefec61630223fce08c6c ] Rainer Weikusat writes: An AF_UNIX datagram socket being the client in an n:1 association with some server socket is only allowed to send messages to the server if the receive queue of this socket contains at most sk_max_ack_backlog datagrams. This implies that prospective writers might be forced to go to sleep despite none of the message presently enqueued on the server receive queue were sent by them. In order to ensure that these will be woken up once space becomes again available, the present unix_dgram_poll routine does a second sock_poll_wait call with the peer_wait wait queue of the server socket as queue argument (unix_dgram_recvmsg does a wake up on this queue after a datagram was received). This is inherently problematic because the server socket is only guaranteed to remain alive for as long as the client still holds a reference to it. In case the connection is dissolved via connect or by the dead peer detection logic in unix_dgram_sendmsg, the server socket may be freed despite "the polling mechanism" (in particular, epoll) still has a pointer to the corresponding peer_wait queue. There's no way to forcibly deregister a wait queue with epoll. Based on an idea by Jason Baron, the patch below changes the code such that a wait_queue_t belonging to the client socket is enqueued on the peer_wait queue of the server whenever the peer receive queue full condition is detected by either a sendmsg or a poll. A wake up on the peer queue is then relayed to the ordinary wait queue of the client socket via wake function. The connection to the peer wait queue is again dissolved if either a wake up is about to be relayed or the client socket reconnects or a dead peer is detected or the client socket is itself closed. This enables removing the second sock_poll_wait from unix_dgram_poll, thus avoiding the use-after-free, while still ensuring that no blocked writer sleeps forever. Signed-off-by: Rainer Weikusat Fixes: ec0d215f9420 ("af_unix: fix 'poll for write'/connected DGRAM sockets") Reviewed-by: Jason Baron Signed-off-by: David S. Miller Signed-off-by: Sasha Levin commit 2950f7c83792442a56c6bcc6a941af33bbe686a5 Author: Rainer Weikusat Date: Wed Dec 16 20:09:25 2015 +0000 af_unix: Revert 'lock_interruptible' in stream receive code [ Upstream commit 3822b5c2fc62e3de8a0f33806ff279fb7df92432 ] With b3ca9b02b00704053a38bfe4c31dbbb9c13595d0, the AF_UNIX SOCK_STREAM receive code was changed from using mutex_lock(&u->readlock) to mutex_lock_interruptible(&u->readlock) to prevent signals from being delayed for an indefinite time if a thread sleeping on the mutex happened to be selected for handling the signal. But this was never a problem with the stream receive code (as opposed to its datagram counterpart) as that never went to sleep waiting for new messages with the mutex held and thus, wouldn't cause secondary readers to block on the mutex waiting for the sleeping primary reader. As the interruptible locking makes the code more complicated in exchange for no benefit, change it back to using mutex_lock. Signed-off-by: Rainer Weikusat Acked-by: Hannes Frederic Sowa Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin

commit 49d0edfda3a11d364b434d48ec92127d5f9ede38
Author: Hannes Frederic Sowa
Date: Tue Dec 15 21:01:53 2015 +0100

fou: clean up socket with kfree_rcu

[ Upstream commit 3036facbb7be3a169e35be3b271162b0fa564a2d ]

fou->udp_offloads is managed by RCU. As it is actually included inside the fou sockets, we cannot let the memory go out of scope before a grace period. We can either synchronize_rcu or switch over to kfree_rcu to manage the sockets. kfree_rcu seems appropriate as it is used by vxlan and geneve.

Fixes: 23461551c00628c ("fou: Support for foo-over-udp RX path")
Cc: Tom Herbert
Signed-off-by: Hannes Frederic Sowa
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 566198569555189eff9c11c67bbaefb1bacc7bfa
Author: David S. Miller
Date: Tue Dec 15 15:39:08 2015 -0500

bluetooth: Validate socket address length in sco_sock_bind().

[ Upstream commit 5233252fce714053f0151680933571a2da9cbfb4 ]

Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 652ed6f6effe13ce2fc0215230517aa01bdbf3e3
Author: WANG Cong
Date: Mon Dec 14 13:48:36 2015 -0800

pptp: verify sockaddr_len in pptp_bind() and pptp_connect()

[ Upstream commit 09ccfd238e5a0e670d8178cf50180ea81ae09ae1 ]

Reported-by: Dmitry Vyukov
Signed-off-by: Cong Wang
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit eb2c241e08e87ca7455da70608da5032b7f27423
Author: Vlad Yasevich
Date: Mon Dec 14 17:44:10 2015 -0500

skbuff: Fix offset error in skb_reorder_vlan_header

[ Upstream commit f654861569872d10dcb79d9d7ca219b316f94ff0 ]

skb_reorder_vlan_header is called after the vlan header has been pulled. As a result the offset of the beginning of the mac header has been increased by 4 bytes (VLAN_HLEN). When moving the mac addresses, include this increase in the offset calculation so that the mac addresses are copied correctly.

Fixes: a6e18ff1117 (vlan: Fix untag operations of stacked vlans with REORDER_HEADER off)
CC: Nicolas Dichtel
CC: Patrick McHardy
Signed-off-by: Vladislav Yasevich
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 6ff6d3c9349681cbe2559a6f47c93a8c716ece1d
Author: Vlad Yasevich
Date: Mon Nov 16 15:43:44 2015 -0500

vlan: Fix untag operations of stacked vlans with REORDER_HEADER off

[ Upstream commit a6e18ff111701b4ff6947605bfbe9594ec42a6e8 ]

When we have multiple stacked vlan devices all of which have turned off the REORDER_HEADER flag, the untag operation does not locate the ethernet addresses correctly for nested vlans. The reason is that in case of the REORDER_HEADER flag being off, the outer vlan headers are put back and the mac_len is adjusted to account for the presence of the header. Then, the subsequent untag operation, for the next level vlan, always uses VLAN_ETH_HLEN to locate the beginning of the ethernet header and that ends up being a multiple of 4 bytes short of the actual beginning of the mac header (the multiple depending on how many vlan encapsulations there are). As a result, if there are multiple levels of vlan devices with REORDER_HEADER being off, the received packets end up being dropped.

To solve this, we use skb->mac_len as the offset. The value is always set on the receive path and starts out as ETH_HLEN. The value is also updated when the vlan header manipulations occur so we know it will be correct.

Signed-off-by: Vladislav Yasevich
Signed-off-by: David S.
Miller Signed-off-by: Sasha Levin commit 54b6eaa343c145b4a6aafa31025cf6d3baaac324 Author: Eric Dumazet Date: Mon Dec 14 14:08:53 2015 -0800 net: fix IP early demux races [ Upstream commit 5037e9ef9454917b047f9f3a19b4dd179fbf7cd4 ] David Wilder reported crashes caused by dst reuse. I am seeing a crash on a distro V4.2.3 kernel caused by a double release of a dst_entry. In ipv4_dst_destroy() the call to list_empty() finds a poisoned next pointer, indicating the dst_entry has already been removed from the list and freed. The crash occurs 18 to 24 hours into a run of a network stress exerciser. Thanks to his detailed report and analysis, we were able to understand the core issue. IP early demux can associate a dst to skb, after a lookup in TCP/UDP sockets. When socket cache is not properly set, we want to store into sk->sk_dst_cache the dst for future IP early demux lookups, by acquiring a stable refcount on the dst. Problem is this acquisition is simply using an atomic_inc(), which works well, unless the dst was queued for destruction from dst_release() noticing dst refcount went to zero, if DST_NOCACHE was set on dst. We need to make sure current refcount is not zero before incrementing it, or risk double free as David reported. This patch, being a stable candidate, adds two new helpers, and use them only from IP early demux problematic paths. It might be possible to merge in net-next skb_dst_force() and skb_dst_force_safe(), but I prefer having the smallest patch for stable kernels : Maybe some skb_dst_force() callers do not expect skb->dst can suddenly be cleared. Can probably be backported back to linux-3.6 kernels Reported-by: David J. Wilder Tested-by: David J. Wilder Signed-off-by: Eric Dumazet Signed-off-by: David S. Miller Signed-off-by: Sasha Levin commit ff9b245372ffaba7a64563521474a882d9d72dfd Author: Sergei Shtylyov Date: Fri Dec 4 01:45:40 2015 +0300 sh_eth: fix kernel oops in skb_put() [ Upstream commit 248be83dcb3feb3f6332eb3d010a016402138484 ] In a low memory situation the following kernel oops occurs: Unable to handle kernel NULL pointer dereference at virtual address 00000050 pgd = 8490c000 [00000050] *pgd=4651e831, *pte=00000000, *ppte=00000000 Internal error: Oops: 17 [#1] PREEMPT ARM Modules linked in: CPU: 0 Not tainted (3.4-at16 #9) PC is at skb_put+0x10/0x98 LR is at sh_eth_poll+0x2c8/0xa10 pc : [<8035f780>] lr : [<8028bf50>] psr: 60000113 sp : 84eb1a90 ip : 84eb1ac8 fp : 84eb1ac4 r10: 0000003f r9 : 000005ea r8 : 00000000 r7 : 00000000 r6 : 940453b0 r5 : 00030000 r4 : 9381b180 r3 : 00000000 r2 : 00000000 r1 : 000005ea r0 : 00000000 Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user Control: 10c53c7d Table: 4248c059 DAC: 00000015 Process klogd (pid: 2046, stack limit = 0x84eb02e8) [...] This is because netdev_alloc_skb() fails and 'mdp->rx_skbuff[entry]' is left NULL but sh_eth_rx() later uses it without checking. Add such check... Reported-by: Yasushi SHOJI Signed-off-by: Sergei Shtylyov Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin

commit e60ccfd9e596b48d4b9d6e2b5440261c83d10c12
Author: Hannes Frederic Sowa
Date: Mon Dec 14 22:03:39 2015 +0100

net: add validation for the socket syscall protocol argument

[ Upstream commit 79462ad02e861803b3840cc782248c7359451cd9 ]

郭永刚 reported that one could simply crash the kernel as root by using a simple program:

int socket_fd;
struct sockaddr_in addr;

addr.sin_port = 0;
addr.sin_addr.s_addr = INADDR_ANY;
addr.sin_family = 10;

socket_fd = socket(10, 3, 0x40000000);
connect(socket_fd, &addr, 16);

AF_INET and AF_INET6 sockets actually only support 8-bit protocol identifiers. inet_sock's skc_protocol field thus is sized accordingly, so larger protocol identifiers simply get their higher bits cut off and a zero is stored in the protocol field.

This could lead to e.g. a NULL function pointer dereference, because as a result of the cut-off inet_num is zero and we call down to inet_autobind, which is NULL for raw sockets.

kernel: Call Trace:
kernel: [] ? inet_autobind+0x2e/0x70
kernel: [] inet_dgram_connect+0x54/0x80
kernel: [] SYSC_connect+0xd9/0x110
kernel: [] ? ptrace_notify+0x5b/0x80
kernel: [] ? syscall_trace_enter_phase2+0x108/0x200
kernel: [] SyS_connect+0xe/0x10
kernel: [] tracesys_phase2+0x84/0x89

I found no particular commit which introduced this problem.

CVE: CVE-2015-8543
Cc: Cong Wang
Reported-by: 郭永刚
Signed-off-by: Hannes Frederic Sowa
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit d6d2547876dd3c44d1359c522d2943f05b2d44bf
Author: Eric Dumazet
Date: Wed Dec 9 07:25:06 2015 -0800

ipv6: sctp: clone options to avoid use after free

[ Upstream commit 9470e24f35ab81574da54e69df90c1eb4a96b43f ]

SCTP is lacking proper np->opt cloning at accept() time. TCP and DCCP use the ipv6_dup_options() helper, so do the same in SCTP. We might later factorize this code into a common helper to avoid future mistakes.

Reported-by: Dmitry Vyukov
Signed-off-by: Eric Dumazet
Acked-by: Vlad Yasevich
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit d2ed2c315f1e31f88dde79611f35012516f546a4
Author: Stefan Wahren
Date: Fri Dec 4 16:29:10 2015 +0100

net: qca_spi: fix transmit queue timeout handling

[ Upstream commit ed7d42e24effbd3681e909711a7a2119a85e9217 ]

In case of a tx queue timeout, every transmit is blocked until the QCA7000 resets itself and triggers a sync which makes the driver flush the tx ring. So avoid this blocking situation by triggering the sync immediately after the timeout. Waking the queue doesn't make sense in this situation.

Signed-off-by: Stefan Wahren
Fixes: 291ab06ecf67 ("net: qualcomm: new Ethernet over SPI driver for QCA7000")
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 9ee9284e62b25ee233eb809ed2e8696584753c9b
Author: Marcelo Ricardo Leitner
Date: Fri Dec 4 15:14:05 2015 -0200

sctp: also copy sk_tsflags when copying the socket

[ Upstream commit 50a5ffb1ef535e3c6989711c51b5d61b543a3b45 ]

As we are keeping timestamps on when copying the socket, we also have to copy sk_tsflags. This is needed since b9f40e21ef42 ("net-timestamp: move timestamp flags out of sk_flags").

Signed-off-by: Marcelo Ricardo Leitner
Acked-by: Vlad Yasevich
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 34c664a49b82ce45cb5299ab683ee2c523ab0d57
Author: Marcelo Ricardo Leitner
Date: Fri Dec 4 15:14:04 2015 -0200

sctp: update the netstamp_needed counter when copying sockets

[ Upstream commit 01ce63c90170283a9855d1db4fe81934dddce648 ]

Dmitry Vyukov reported that SCTP was triggering a WARN on socket destroy related to disabling sock timestamp.
When SCTP accepts an association or peels one off, it copies sock flags but forgets to call net_enable_timestamp() if a packet timestamping flag was copied, leading to extra calls to net_disable_timestamp() whenever such clones were closed.

The fix is to call net_enable_timestamp() whenever we copy a sock with that flag on, like tcp does.

Reported-by: Dmitry Vyukov
Signed-off-by: Marcelo Ricardo Leitner
Acked-by: Vlad Yasevich
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 348058993dbe9778123657fb0b790ae43966688c
Author: Marcelo Ricardo Leitner
Date: Fri Dec 4 15:14:03 2015 -0200

sctp: use the same clock as if sock source timestamps were on

[ Upstream commit cb5e173ed7c03a0d4630ce68a95a186cce3cc872 ]

SCTP echoes a cookie on INIT ACK chunks that contains a timestamp, for detecting stale cookies. This cookie is echoed back to the server by the client and then that timestamp is checked.

Thing is, if the listening socket is using packet timestamping, the cookie is encoded with the ktime_get() value and checked against ktime_get_real(), as done by __net_timestamp().

The fix is to have SCTP also use ktime_get_real(), so we can compare bananas with bananas later, no matter if packet timestamping was enabled or not.

Fixes: 52db882f3fc2 ("net: sctp: migrate cookie life from timeval to ktime")
Signed-off-by: Marcelo Ricardo Leitner
Acked-by: Vlad Yasevich
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 0990543936c90184c48e875f0afe3b09ef444493
Author: Pavel Machek
Date: Fri Dec 4 09:50:00 2015 +0100

atl1c: Improve driver not to do order 4 GFP_ATOMIC allocation

[ Upstream commit f2a3771ae8aca879c32336c76ad05a017629bae2 ]

The atl1c driver is doing an order-4 allocation with GFP_ATOMIC priority. That often breaks networking after resume. Switch to GFP_KERNEL. Still not ideal, but it should be significantly better.

atl1c_setup_ring_resources() is called from the .open() function, and already uses GFP_KERNEL, so this change is safe.

Signed-off-by: Pavel Machek
Acked-by: Michal Hocko
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin

commit 67041f12b30a302692501cacf7aa53ac80ae10a0
Author: Nicolas Dichtel
Date: Thu Dec 3 17:21:50 2015 +0100

gre6: allow to update all parameters via rtnl

[ Upstream commit 6a61d4dbf4f54b5683e0f1e58d873cecca7cb977 ]

Parameters were updated only if the kernel was unable to find the tunnel with the new parameters, i.e. only if core parameters were updated (keys, addr, link, type). Now it's possible to update ttl, hoplimit, flowinfo and flags.

Fixes: c12b395a4664 ("gre: Support GRE over IPv6")
Signed-off-by: Nicolas Dichtel
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin