// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Copyright (C) 2005-2007 Kristian Hoegsberg <krh@bitplanet.net>
 */

#include <linux/bug.h>
#include <linux/completion.h>
#include <linux/crc-itu-t.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/firewire.h>
#include <linux/firewire-constants.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#include <linux/atomic.h>
#include <asm/byteorder.h>

#include "core.h"

#include <trace/events/firewire.h>

#define define_fw_printk_level(func, kern_level)		\
void func(const struct fw_card *card, const char *fmt, ...)	\
{								\
	struct va_format vaf;					\
	va_list args;						\
								\
	va_start(args, fmt);					\
	vaf.fmt = fmt;						\
	vaf.va = &args;						\
	printk(kern_level KBUILD_MODNAME " %s: %pV",		\
	       dev_name(card->device), &vaf);			\
	va_end(args);						\
}
define_fw_printk_level(fw_err, KERN_ERR);
define_fw_printk_level(fw_notice, KERN_NOTICE);

int fw_compute_block_crc(__be32 *block)
{
	int length;
	u16 crc;

	length = (be32_to_cpu(block[0]) >> 16) & 0xff;
	crc = crc_itu_t(0, (u8 *)&block[1], length * 4);
	*block |= cpu_to_be32(crc);

	return length;
}

static DEFINE_MUTEX(card_mutex);
static LIST_HEAD(card_list);

static LIST_HEAD(descriptor_list);
static int descriptor_count;

static __be32 tmp_config_rom[256];
/* ROM header, bus info block, root dir header, capabilities = 7 quadlets */
static size_t config_rom_length = 1 + 4 + 1 + 1;

#define BIB_CRC(v)		((v) <<  0)
#define BIB_CRC_LENGTH(v)	((v) << 16)
#define BIB_INFO_LENGTH(v)	((v) << 24)
#define BIB_BUS_NAME		0x31333934 /* "1394" */
#define BIB_LINK_SPEED(v)	((v) <<  0)
#define BIB_GENERATION(v)	((v) <<  4)
#define BIB_MAX_ROM(v)		((v) <<  8)
#define BIB_MAX_RECEIVE(v)	((v) << 12)
#define BIB_CYC_CLK_ACC(v)	((v) << 16)
#define BIB_PMC			((1) << 27)
#define BIB_BMC			((1) << 28)
#define BIB_ISC			((1) << 29)
#define BIB_CMC			((1) << 30)
#define BIB_IRMC		((1) << 31)
#define NODE_CAPABILITIES	0x0c0083c0 /* per IEEE 1394 clause 8.3.2.6.5.2 */

/*
 * IEEE-1394 specifies a default SPLIT_TIMEOUT value of 800 cycles (100 ms),
 * but we have to make it longer because there are many devices whose firmware
 * is just too slow for that.
 */
#define DEFAULT_SPLIT_TIMEOUT	(2 * 8000)

#define CANON_OUI		0x000085

static void generate_config_rom(struct fw_card *card, __be32 *config_rom)
{
	struct fw_descriptor *desc;
	int i, j, k, length;

	/*
	 * Initialize contents of config rom buffer.  On the OHCI
	 * controller, block reads to the config rom accesses the host
	 * memory, but quadlet read access the hardware bus info block
	 * registers.  That's just crack, but it means we should make
	 * sure the contents of bus info block in host memory matches
	 * the version stored in the OHCI registers.
	 */

	config_rom[0] = cpu_to_be32(
		BIB_CRC_LENGTH(4) | BIB_INFO_LENGTH(4) | BIB_CRC(0));
	config_rom[1] = cpu_to_be32(BIB_BUS_NAME);
	config_rom[2] = cpu_to_be32(
		BIB_LINK_SPEED(card->link_speed) |
		BIB_GENERATION(card->config_rom_generation++ % 14 + 2) |
		BIB_MAX_ROM(2) |
		BIB_MAX_RECEIVE(card->max_receive) |
		BIB_BMC | BIB_ISC | BIB_CMC | BIB_IRMC);
	config_rom[3] = cpu_to_be32(card->guid >> 32);
	config_rom[4] = cpu_to_be32(card->guid);

	/* Generate root directory. */
	config_rom[6] = cpu_to_be32(NODE_CAPABILITIES);
	i = 7;
	j = 7 + descriptor_count;

	/* Generate root directory entries for descriptors. */
	list_for_each_entry (desc, &descriptor_list, link) {
		if (desc->immediate > 0)
			config_rom[i++] = cpu_to_be32(desc->immediate);
		config_rom[i] = cpu_to_be32(desc->key | (j - i));
		i++;
		j += desc->length;
	}

	/* Update root directory length. */
	config_rom[5] = cpu_to_be32((i - 5 - 1) << 16);

	/* End of root directory, now copy in descriptors. */
	list_for_each_entry (desc, &descriptor_list, link) {
		for (k = 0; k < desc->length; k++)
			config_rom[i + k] = cpu_to_be32(desc->data[k]);
		i += desc->length;
	}

	/* Calculate CRCs for all blocks in the config rom.  This
	 * assumes that CRC length and info length are identical for
	 * the bus info block, which is always the case for this
	 * implementation. */
	for (i = 0; i < j; i += length + 1)
		length = fw_compute_block_crc(config_rom + i);

	WARN_ON(j != config_rom_length);
}

static void update_config_roms(void)
{
	struct fw_card *card;

	list_for_each_entry (card, &card_list, link) {
		generate_config_rom(card, tmp_config_rom);
		card->driver->set_config_rom(card, tmp_config_rom,
					     config_rom_length);
	}
}

static size_t required_space(struct fw_descriptor *desc)
{
	/* descriptor + entry into root dir + optional immediate entry */
	return desc->length + 1 + (desc->immediate > 0 ? 1 : 0);
}

int fw_core_add_descriptor(struct fw_descriptor *desc)
{
	size_t i;

	/*
	 * Check descriptor is valid; the length of all blocks in the
	 * descriptor has to add up to exactly the length of the
	 * block.
	 */
	i = 0;
	while (i < desc->length)
		i += (desc->data[i] >> 16) + 1;

	if (i != desc->length)
		return -EINVAL;

	guard(mutex)(&card_mutex);

	if (config_rom_length + required_space(desc) > 256)
		return -EBUSY;

	list_add_tail(&desc->link, &descriptor_list);
	config_rom_length += required_space(desc);
	descriptor_count++;
	if (desc->immediate > 0)
		descriptor_count++;
	update_config_roms();

	return 0;
}
EXPORT_SYMBOL(fw_core_add_descriptor);

void fw_core_remove_descriptor(struct fw_descriptor *desc)
{
	guard(mutex)(&card_mutex);

	list_del(&desc->link);
	config_rom_length -= required_space(desc);
	descriptor_count--;
	if (desc->immediate > 0)
		descriptor_count--;
	update_config_roms();
}
EXPORT_SYMBOL(fw_core_remove_descriptor);

static int reset_bus(struct fw_card *card, bool short_reset)
{
	int reg = short_reset ? 5 : 1;
	int bit = short_reset ? PHY_BUS_SHORT_RESET : PHY_BUS_RESET;

	trace_bus_reset_initiate(card->index, card->generation, short_reset);

	return card->driver->update_phy_reg(card, reg, 0, bit);
}

void fw_schedule_bus_reset(struct fw_card *card, bool delayed, bool short_reset)
{
	trace_bus_reset_schedule(card->index, card->generation, short_reset);

	/* We don't try hard to sort out requests of long vs. short resets. */
	card->br_short = short_reset;

	/* Use an arbitrary short delay to combine multiple reset requests. */
	fw_card_get(card);
	if (!queue_delayed_work(fw_workqueue, &card->br_work, delayed ? msecs_to_jiffies(10) : 0))
		fw_card_put(card);
}
EXPORT_SYMBOL(fw_schedule_bus_reset);

static void br_work(struct work_struct *work)
{
	struct fw_card *card = from_work(card, work, br_work.work);

	/* Delay for 2s after last reset per IEEE 1394 clause 8.2.1. */
	if (card->reset_jiffies != 0 &&
	    time_is_after_jiffies64(card->reset_jiffies + secs_to_jiffies(2))) {
		trace_bus_reset_postpone(card->index, card->generation, card->br_short);

		if (!queue_delayed_work(fw_workqueue, &card->br_work, secs_to_jiffies(2)))
			fw_card_put(card);
		return;
	}

	fw_send_phy_config(card, FW_PHY_CONFIG_NO_NODE_ID, card->generation,
			   FW_PHY_CONFIG_CURRENT_GAP_COUNT);
	reset_bus(card, card->br_short);
	fw_card_put(card);
}

static void allocate_broadcast_channel(struct fw_card *card, int generation)
{
	int channel, bandwidth = 0;

	if (!card->broadcast_channel_allocated) {
		fw_iso_resource_manage(card, generation, 1ULL << 31,
				       &channel, &bandwidth, true);
		if (channel != 31) {
			fw_notice(card, "failed to allocate broadcast channel\n");
			return;
		}
		card->broadcast_channel_allocated = true;
	}

	device_for_each_child(card->device, (void *)(long)generation,
			      fw_device_set_broadcast_channel);
}

void fw_schedule_bm_work(struct fw_card *card, unsigned long delay)
{
	fw_card_get(card);
	if (!schedule_delayed_work(&card->bm_work, delay))
		fw_card_put(card);
}

enum bm_contention_outcome {
	// The bus management contention window has not yet expired.
	BM_CONTENTION_OUTCOME_WITHIN_WINDOW = 0,
	// The IRM node has its link off.
	BM_CONTENTION_OUTCOME_IRM_HAS_LINK_OFF,
	// The IRM node complies with IEEE 1394:1995 only.
	BM_CONTENTION_OUTCOME_IRM_COMPLIES_1394_1995_ONLY,
	// Another bus reset occurred; BM work has been rescheduled.
	BM_CONTENTION_OUTCOME_AT_NEW_GENERATION,
	// We have been unable to send the lock request to the IRM node due to some local problem.
	BM_CONTENTION_OUTCOME_LOCAL_PROBLEM_AT_TRANSACTION,
	// The lock request failed; maybe the IRM isn't really IRM capable after all.
	BM_CONTENTION_OUTCOME_IRM_IS_NOT_CAPABLE_FOR_IRM,
	// Somebody else is BM.
	BM_CONTENTION_OUTCOME_IRM_HOLDS_ANOTHER_NODE_AS_BM,
	// The local node succeeded in contending for bus manager.
	BM_CONTENTION_OUTCOME_IRM_HOLDS_LOCAL_NODE_AS_BM,
};

static enum bm_contention_outcome contend_for_bm(struct fw_card *card)
	__must_hold(&card->lock)
{
	int generation = card->generation;
	int local_id = card->local_node->node_id;
	__be32 data[2] = {
		cpu_to_be32(BUS_MANAGER_ID_NOT_REGISTERED),
		cpu_to_be32(local_id),
	};
	bool grace = time_is_before_jiffies64(card->reset_jiffies + msecs_to_jiffies(125));
	bool irm_is_1394_1995_only = false;
	bool keep_this_irm = false;
	struct fw_node *irm_node;
	struct fw_device *irm_device;
	int irm_node_id;
	int rcode;

	lockdep_assert_held(&card->lock);

	if (!grace) {
		if (!is_next_generation(generation, card->bm_generation) || card->bm_abdicate)
			return BM_CONTENTION_OUTCOME_WITHIN_WINDOW;
	}

	irm_node = card->irm_node;
	if (!irm_node->link_on) {
		fw_notice(card, "IRM has link off, making local node (%02x) root\n", local_id);
		return BM_CONTENTION_OUTCOME_IRM_HAS_LINK_OFF;
	}

	irm_device = fw_node_get_device(irm_node);
	if (irm_device && irm_device->config_rom) {
		irm_is_1394_1995_only = (irm_device->config_rom[2] & 0x000000f0) == 0;

		// Canon MV5i works unreliably if it is not root node.
		keep_this_irm = irm_device->config_rom[3] >> 8 == CANON_OUI;
	}

	if (irm_is_1394_1995_only && !keep_this_irm) {
		fw_notice(card, "IRM is not 1394a compliant, making local node (%02x) root\n",
			  local_id);
		return BM_CONTENTION_OUTCOME_IRM_COMPLIES_1394_1995_ONLY;
	}

	irm_node_id = irm_node->node_id;

	spin_unlock_irq(&card->lock);

	rcode = fw_run_transaction(card, TCODE_LOCK_COMPARE_SWAP, irm_node_id, generation,
				   SCODE_100, CSR_REGISTER_BASE + CSR_BUS_MANAGER_ID, data,
				   sizeof(data));

	spin_lock_irq(&card->lock);

	switch (rcode) {
	case RCODE_GENERATION:
		return BM_CONTENTION_OUTCOME_AT_NEW_GENERATION;
	case RCODE_SEND_ERROR:
		return BM_CONTENTION_OUTCOME_LOCAL_PROBLEM_AT_TRANSACTION;
	case RCODE_COMPLETE:
	{
		int bm_id = be32_to_cpu(data[0]);

		// Used by cdev layer for "struct fw_cdev_event_bus_reset".
		if (bm_id != BUS_MANAGER_ID_NOT_REGISTERED)
			card->bm_node_id = 0xffc0 | bm_id;
		else
			card->bm_node_id = local_id;

		if (bm_id != BUS_MANAGER_ID_NOT_REGISTERED)
			return BM_CONTENTION_OUTCOME_IRM_HOLDS_ANOTHER_NODE_AS_BM;
		else
			return BM_CONTENTION_OUTCOME_IRM_HOLDS_LOCAL_NODE_AS_BM;
	}
	default:
		if (!keep_this_irm) {
			fw_notice(card, "BM lock failed (%s), making local node (%02x) root\n",
				  fw_rcode_string(rcode), local_id);
			return BM_CONTENTION_OUTCOME_IRM_COMPLIES_1394_1995_ONLY;
		} else {
			return BM_CONTENTION_OUTCOME_IRM_IS_NOT_CAPABLE_FOR_IRM;
		}
	}
}

DEFINE_FREE(node_unref, struct fw_node *, if (_T) fw_node_put(_T))
DEFINE_FREE(card_unref, struct fw_card *, if (_T) fw_card_put(_T))

2010-07-08 16:09:06 +02:00
|
|
|
static void bm_work(struct work_struct *work)
|
2006-12-19 19:58:31 -05:00
|
|
|
{
|
2025-06-17 09:43:20 +09:00
|
|
|
static const char gap_count_table[] = {
|
|
|
|
|
63, 5, 7, 8, 10, 13, 16, 18, 21, 24, 26, 29, 32, 35, 37, 40
|
|
|
|
|
};
|
2025-09-08 10:21:02 +09:00
|
|
|
struct fw_card *card __free(card_unref) = from_work(card, work, bm_work.work);
|
|
|
|
|
struct fw_node *root_node __free(node_unref) = NULL;
|
2025-09-08 10:21:04 +09:00
|
|
|
int root_id, new_root_id, irm_id, local_id;
|
2025-09-19 08:54:45 +09:00
|
|
|
int expected_gap_count, generation;
|
2025-09-19 08:54:47 +09:00
|
|
|
bool stand_for_root = false;
|
2006-12-19 19:58:31 -05:00
|
|
|
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_lock_irq(&card->lock);
|
|
|
|
|
|
|
|
|
|
if (card->local_node == NULL) {
|
|
|
|
|
spin_unlock_irq(&card->lock);
|
2025-09-08 10:21:02 +09:00
|
|
|
return;
|
2025-09-24 22:18:22 +09:00
|
|
|
}
|
2006-12-19 19:58:31 -05:00
|
|
|
|
|
|
|
|
generation = card->generation;
|
2010-05-30 19:43:52 +02:00
|
|
|
|
2025-09-08 10:21:00 +09:00
|
|
|
root_node = fw_node_get(card->root_node);
|
2010-05-30 19:43:52 +02:00
|
|
|
|
2009-03-10 21:08:37 +01:00
|
|
|
root_id = root_node->node_id;
|
|
|
|
|
irm_id = card->irm_node->node_id;
|
|
|
|
|
local_id = card->local_node->node_id;
|
2009-03-10 21:07:46 +01:00
|
|
|
|
2025-09-19 08:54:45 +09:00
|
|
|
if (card->bm_generation != generation) {
|
2025-09-19 08:54:46 +09:00
|
|
|
enum bm_contention_outcome result = contend_for_bm(card);
|
2007-01-26 00:38:45 -05:00
|
|
|
|
2025-09-19 08:54:46 +09:00
|
|
|
switch (result) {
|
|
|
|
|
case BM_CONTENTION_OUTCOME_WITHIN_WINDOW:
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_unlock_irq(&card->lock);
|
2025-09-19 08:54:46 +09:00
|
|
|
fw_schedule_bm_work(card, msecs_to_jiffies(125));
|
|
|
|
|
return;
|
|
|
|
|
case BM_CONTENTION_OUTCOME_IRM_HAS_LINK_OFF:
|
2025-09-19 08:54:47 +09:00
|
|
|
stand_for_root = true;
|
|
|
|
|
break;
|
2025-09-19 08:54:46 +09:00
|
|
|
case BM_CONTENTION_OUTCOME_IRM_COMPLIES_1394_1995_ONLY:
|
2025-09-19 08:54:47 +09:00
|
|
|
stand_for_root = true;
|
|
|
|
|
break;
|
2025-09-19 08:54:46 +09:00
|
|
|
case BM_CONTENTION_OUTCOME_AT_NEW_GENERATION:
|
|
|
|
|
// BM work has been rescheduled.
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_unlock_irq(&card->lock);
|
2025-09-19 08:54:46 +09:00
|
|
|
return;
|
|
|
|
|
case BM_CONTENTION_OUTCOME_LOCAL_PROBLEM_AT_TRANSACTION:
|
|
|
|
|
// Let's try again later and hope that the local problem has gone away by
|
|
|
|
|
// then.
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_unlock_irq(&card->lock);
|
2025-09-19 08:54:45 +09:00
|
|
|
fw_schedule_bm_work(card, msecs_to_jiffies(125));
|
|
|
|
|
return;
|
2025-09-19 08:54:46 +09:00
|
|
|
case BM_CONTENTION_OUTCOME_IRM_IS_NOT_CAPABLE_FOR_IRM:
|
|
|
|
|
// Let's do a bus reset and pick the local node as root, and thus, IRM.
|
2025-09-19 08:54:47 +09:00
|
|
|
stand_for_root = true;
|
|
|
|
|
break;
|
2025-09-19 08:54:46 +09:00
|
|
|
case BM_CONTENTION_OUTCOME_IRM_HOLDS_ANOTHER_NODE_AS_BM:
|
|
|
|
|
if (local_id == irm_id) {
|
|
|
|
|
// Only acts as IRM.
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_unlock_irq(&card->lock);
|
2025-09-19 08:54:46 +09:00
|
|
|
allocate_broadcast_channel(card, generation);
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_lock_irq(&card->lock);
|
2025-09-19 08:54:46 +09:00
|
|
|
}
|
|
|
|
|
fallthrough;
|
|
|
|
|
case BM_CONTENTION_OUTCOME_IRM_HOLDS_LOCAL_NODE_AS_BM:
|
|
|
|
|
default:
|
|
|
|
|
card->bm_generation = generation;
|
|
|
|
|
break;
|
2007-01-26 00:38:45 -05:00
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2025-09-19 08:54:47 +09:00
|
|
|
// We're bus manager for this generation, so next step is to make sure we have an active
|
|
|
|
|
// cycle master and do gap count optimization.
|
|
|
|
|
if (!stand_for_root) {
|
|
|
|
|
if (card->gap_count == GAP_COUNT_MISMATCHED) {
|
|
|
|
|
// If self IDs have inconsistent gap counts, do a
|
|
|
|
|
// bus reset ASAP. The config rom read might never
|
|
|
|
|
// complete, so don't wait for it. However, still
|
|
|
|
|
// send a PHY configuration packet prior to the
|
|
|
|
|
// bus reset. The PHY configuration packet might
|
|
|
|
|
// fail, but 1394-2008 8.4.5.2 explicitly permits
|
|
|
|
|
// it in this case, so it should be safe to try.
|
|
|
|
|
stand_for_root = true;
|
|
|
|
|
|
|
|
|
|
// We must always send a bus reset if the gap count
|
|
|
|
|
// is inconsistent, so bypass the 5-reset limit.
|
|
|
|
|
card->bm_retries = 0;
|
2025-09-08 10:21:07 +09:00
|
|
|
} else {
|
2025-09-19 08:54:47 +09:00
|
|
|
// Now investigate root node.
|
|
|
|
|
struct fw_device *root_device = fw_node_get_device(root_node);
|
|
|
|
|
|
|
|
|
|
if (root_device == NULL) {
|
|
|
|
|
// Either link_on is false, or we failed to read the
|
|
|
|
|
// config rom. In either case, pick another root.
|
|
|
|
|
stand_for_root = true;
|
2025-09-08 10:21:07 +09:00
|
|
|
} else {
|
2025-09-19 08:54:47 +09:00
|
|
|
bool root_device_is_running =
|
|
|
|
|
atomic_read(&root_device->state) == FW_DEVICE_RUNNING;
|
|
|
|
|
|
|
|
|
|
if (!root_device_is_running) {
|
|
|
|
|
// If we haven't probed this device yet, bail out now
|
|
|
|
|
// and let's try again once that's done.
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_unlock_irq(&card->lock);
|
2025-09-19 08:54:47 +09:00
|
|
|
return;
|
|
|
|
|
} else if (!root_device->cmc) {
|
|
|
|
|
// Current root has an active link layer and we
|
|
|
|
|
// successfully read the config rom, but it's not
|
|
|
|
|
// cycle master capable.
|
|
|
|
|
stand_for_root = true;
|
|
|
|
|
}
|
2025-09-08 10:21:07 +09:00
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
2025-09-19 08:54:47 +09:00
|
|
|
|
|
|
|
|
if (stand_for_root) {
|
|
|
|
|
new_root_id = local_id;
|
|
|
|
|
} else {
|
|
|
|
|
// We will send out a force root packet for this node as part of the gap count
|
|
|
|
|
// optimization on behalf of the node.
|
|
|
|
|
new_root_id = root_id;
|
|
|
|
|
}
|
|
|
|
|
|
2007-06-18 19:44:12 +02:00
|
|
|
/*
|
|
|
|
|
* Pick a gap count from 1394a table E-1. The table doesn't cover
|
|
|
|
|
* the typically much larger 1394b beta repeater delays though.
|
|
|
|
|
*/
|
|
|
|
|
if (!card->beta_repeaters_present &&
|
2008-02-24 18:57:23 +01:00
|
|
|
root_node->max_hops < ARRAY_SIZE(gap_count_table))
|
2025-09-08 10:21:05 +09:00
|
|
|
expected_gap_count = gap_count_table[root_node->max_hops];
|
2007-01-26 00:37:50 -05:00
|
|
|
else
|
2025-09-08 10:21:05 +09:00
|
|
|
expected_gap_count = 63;
|
2007-01-26 00:37:50 -05:00
|
|
|
|
2025-09-19 08:54:48 +09:00
|
|
|
// Finally, figure out if we should do a reset or not. If we have done less than 5 resets
|
|
|
|
|
// with the same physical topology and we have either a new root or a new gap count
|
|
|
|
|
// setting, let's do it.
|
|
|
|
|
if (card->bm_retries++ < 5 && (card->gap_count != expected_gap_count || new_root_id != root_id)) {
|
2025-09-08 10:21:05 +09:00
|
|
|
int card_gap_count = card->gap_count;
|
|
|
|
|
|
2025-09-24 22:18:22 +09:00
|
|
|
spin_unlock_irq(&card->lock);

		fw_notice(card, "phy config: new root=%x, gap_count=%d\n",
			  new_root_id, expected_gap_count);
		fw_send_phy_config(card, new_root_id, generation, expected_gap_count);
		/*
		 * Where possible, use a short bus reset to minimize
		 * disruption to isochronous transfers. But in the event
		 * of a gap count inconsistency, use a long bus reset.
		 *
		 * As noted in 1394a 8.4.6.2, nodes on a mixed 1394/1394a bus
		 * may set different gap counts after a bus reset. On a mixed
		 * 1394/1394a bus, a short bus reset can get doubled. Some
		 * nodes may treat the double reset as one bus reset and others
		 * may treat it as two, causing a gap count inconsistency
		 * again. Using a long bus reset prevents this.
		 */
		reset_bus(card, card_gap_count != 0);
		/* Will allocate broadcast channel after the reset. */
	} else {
		struct fw_device *root_device = fw_node_get_device(root_node);

		spin_unlock_irq(&card->lock);

		if (root_device && root_device->cmc) {
			// Make sure that the cycle master sends cycle start packets.
			__be32 data = cpu_to_be32(CSR_STATE_BIT_CMSTR);
			int rcode = fw_run_transaction(card, TCODE_WRITE_QUADLET_REQUEST,
						       root_id, generation, SCODE_100,
						       CSR_REGISTER_BASE + CSR_STATE_SET,
						       &data, sizeof(data));
			if (rcode == RCODE_GENERATION)
				return;
		}

		if (local_id == irm_id)
			allocate_broadcast_channel(card, generation);
	}
}

void fw_card_initialize(struct fw_card *card,
			const struct fw_card_driver *driver,
			struct device *device)
{
	static atomic_t index = ATOMIC_INIT(-1);

	card->index = atomic_inc_return(&index);
	card->driver = driver;
	card->device = device;

	card->transactions.current_tlabel = 0;
	card->transactions.tlabel_mask = 0;
	INIT_LIST_HEAD(&card->transactions.list);
	spin_lock_init(&card->transactions.lock);

	spin_lock_init(&card->topology_map.lock);

	card->split_timeout.hi = DEFAULT_SPLIT_TIMEOUT / 8000;
	card->split_timeout.lo = (DEFAULT_SPLIT_TIMEOUT % 8000) << 19;
	card->split_timeout.cycles = DEFAULT_SPLIT_TIMEOUT;
	card->split_timeout.jiffies = isoc_cycles_to_jiffies(DEFAULT_SPLIT_TIMEOUT);
	spin_lock_init(&card->split_timeout.lock);
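
	/*
	 * Illustrative note: the SPLIT_TIMEOUT CSR keeps whole seconds in the
	 * _hi register and the remaining isochronous cycles (1/8000ths of a
	 * second) in bits 31..19 of the _lo register. The computation above
	 * splits the cycle count accordingly: hi = cycles / 8000 and
	 * lo = (cycles % 8000) << 19. For example, a hypothetical timeout of
	 * 12000 cycles (1.5 s) would yield hi = 1 and lo = 4000 << 19.
	 */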

	card->color = 0;
	card->broadcast_channel = BROADCAST_CHANNEL_INITIAL;

	kref_init(&card->kref);
	init_completion(&card->done);

	spin_lock_init(&card->lock);

	card->local_node = NULL;

	INIT_DELAYED_WORK(&card->br_work, br_work);
	INIT_DELAYED_WORK(&card->bm_work, bm_work);
}
EXPORT_SYMBOL(fw_card_initialize);

DEFINE_FREE(workqueue_destroy, struct workqueue_struct *, if (_T) destroy_workqueue(_T))
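
/*
 * Note on the scope-based cleanup above: a workqueue assigned to a pointer
 * declared with __free(workqueue_destroy) is destroyed automatically when the
 * pointer goes out of scope while still non-NULL. Ownership is transferred by
 * nulling the pointer (retain_and_null_ptr()), after which the automatic
 * destruction is skipped by the "if (_T)" guard.
 */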

int fw_card_add(struct fw_card *card, u32 max_receive, u32 link_speed, u64 guid,
		unsigned int supported_isoc_contexts)
{
	struct workqueue_struct *isoc_wq __free(workqueue_destroy) = NULL;
	struct workqueue_struct *async_wq __free(workqueue_destroy) = NULL;
	int ret;

	// This workqueue should be:
	// * != WQ_BH			Sleepable.
	// * == WQ_UNBOUND		Any core can process data for isoc context. The
	//				implementation of a unit protocol could occupy the core
	//				for a long time.
	// * != WQ_MEM_RECLAIM		Not used for any backend of block device.
	// * == WQ_FREEZABLE		Isochronous communication is at regular intervals in real
	//				time, thus should be drained if possible at freeze phase.
	// * == WQ_HIGHPRI		High priority to process semi-realtime timestamped data.
	// * == WQ_SYSFS		Parameters are available via sysfs.
	// * max_active == n_it + n_ir	A hardIRQ could notify events for multiple isochronous
	//				contexts if they are scheduled to the same cycle.
	isoc_wq = alloc_workqueue("firewire-isoc-card%u",
				  WQ_UNBOUND | WQ_FREEZABLE | WQ_HIGHPRI | WQ_SYSFS,
				  supported_isoc_contexts, card->index);
	if (!isoc_wq)
		return -ENOMEM;

	// This workqueue should be:
	// * != WQ_BH			Sleepable.
	// * == WQ_UNBOUND		Any core can process data for asynchronous context.
	// * == WQ_MEM_RECLAIM		Used for any backend of block device.
	// * == WQ_FREEZABLE		The target device would not be available when frozen.
	// * == WQ_HIGHPRI		High priority to process semi-realtime timestamped data.
	// * == WQ_SYSFS		Parameters are available via sysfs.
	// * max_active == 4		A hardIRQ could notify events for a pair of request and
	//				response AR/AT contexts.
	async_wq = alloc_workqueue("firewire-async-card%u",
				   WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_HIGHPRI | WQ_SYSFS,
				   4, card->index);
	if (!async_wq)
		return -ENOMEM;

	card->isoc_wq = isoc_wq;
	card->async_wq = async_wq;
	card->max_receive = max_receive;
	card->link_speed = link_speed;
	card->guid = guid;

	scoped_guard(mutex, &card_mutex) {
		generate_config_rom(card, tmp_config_rom);
		ret = card->driver->enable(card, tmp_config_rom, config_rom_length);
		if (ret < 0) {
			card->isoc_wq = NULL;
			card->async_wq = NULL;
			return ret;
		}
		retain_and_null_ptr(isoc_wq);
		retain_and_null_ptr(async_wq);

		list_add_tail(&card->link, &card_list);
	}

	return 0;
}
EXPORT_SYMBOL(fw_card_add);

/*
 * The next few functions implement a dummy driver that is used once a card
 * driver shuts down an fw_card. This allows the driver to cleanly unload,
 * as all IO to the card will be handled (and failed) by the dummy driver
 * instead of calling into the module. Only functions for iso context
 * shutdown still need to be provided by the card driver.
 *
 * .read/write_csr() should never be called anymore after the dummy driver
 * was bound since they are only used within request handler context.
 * .set_config_rom() is never called since the card is taken out of card_list
 * before switching to the dummy driver.
 */

static int dummy_read_phy_reg(struct fw_card *card, int address)
{
	return -ENODEV;
}

static int dummy_update_phy_reg(struct fw_card *card, int address,
				int clear_bits, int set_bits)
{
	return -ENODEV;
}

static void dummy_send_request(struct fw_card *card, struct fw_packet *packet)
{
	packet->callback(packet, card, RCODE_CANCELLED);
}

static void dummy_send_response(struct fw_card *card, struct fw_packet *packet)
{
	packet->callback(packet, card, RCODE_CANCELLED);
}

static int dummy_cancel_packet(struct fw_card *card, struct fw_packet *packet)
{
	return -ENOENT;
}

static int dummy_enable_phys_dma(struct fw_card *card,
				 int node_id, int generation)
{
	return -ENODEV;
}

static struct fw_iso_context *dummy_allocate_iso_context(struct fw_card *card,
				int type, int channel, size_t header_size)
{
	return ERR_PTR(-ENODEV);
}

static u32 dummy_read_csr(struct fw_card *card, int csr_offset)
{
	return 0;
}

static void dummy_write_csr(struct fw_card *card, int csr_offset, u32 value)
{
}

static int dummy_start_iso(struct fw_iso_context *ctx,
			   s32 cycle, u32 sync, u32 tags)
{
	return -ENODEV;
}

static int dummy_set_iso_channels(struct fw_iso_context *ctx, u64 *channels)
{
	return -ENODEV;
}

static int dummy_queue_iso(struct fw_iso_context *ctx, struct fw_iso_packet *p,
			   struct fw_iso_buffer *buffer, unsigned long payload)
{
	return -ENODEV;
}

static void dummy_flush_queue_iso(struct fw_iso_context *ctx)
{
}

static int dummy_flush_iso_completions(struct fw_iso_context *ctx)
{
	return -ENODEV;
}

static const struct fw_card_driver dummy_driver_template = {
	.read_phy_reg		= dummy_read_phy_reg,
	.update_phy_reg		= dummy_update_phy_reg,
	.send_request		= dummy_send_request,
	.send_response		= dummy_send_response,
	.cancel_packet		= dummy_cancel_packet,
	.enable_phys_dma	= dummy_enable_phys_dma,
	.read_csr		= dummy_read_csr,
	.write_csr		= dummy_write_csr,
	.allocate_iso_context	= dummy_allocate_iso_context,
	.start_iso		= dummy_start_iso,
	.set_iso_channels	= dummy_set_iso_channels,
	.queue_iso		= dummy_queue_iso,
	.flush_queue_iso	= dummy_flush_queue_iso,
	.flush_iso_completions	= dummy_flush_iso_completions,
};

void fw_card_release(struct kref *kref)
{
	struct fw_card *card = container_of(kref, struct fw_card, kref);

	complete(&card->done);
}
EXPORT_SYMBOL_GPL(fw_card_release);

void fw_core_remove_card(struct fw_card *card)
{
	struct fw_card_driver dummy_driver = dummy_driver_template;

	might_sleep();

	card->driver->update_phy_reg(card, 4,
				     PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
	fw_schedule_bus_reset(card, false, true);

	scoped_guard(mutex, &card_mutex)
		list_del_init(&card->link);

	/* Switch off most of the card driver interface. */
	dummy_driver.free_iso_context = card->driver->free_iso_context;
	dummy_driver.stop_iso = card->driver->stop_iso;
	card->driver = &dummy_driver;
	drain_workqueue(card->isoc_wq);
	drain_workqueue(card->async_wq);

	scoped_guard(spinlock_irqsave, &card->lock)
		fw_destroy_nodes(card);

	/* Wait for all users, especially device workqueue jobs, to finish. */
	fw_card_put(card);
	wait_for_completion(&card->done);

	destroy_workqueue(card->isoc_wq);
	destroy_workqueue(card->async_wq);

	WARN_ON(!list_empty(&card->transactions.list));
}
EXPORT_SYMBOL(fw_core_remove_card);

/**
 * fw_card_read_cycle_time: read from Isochronous Cycle Timer Register of 1394 OHCI in MMIO region
 *			    for controller card.
 * @card: The instance of card for 1394 OHCI controller.
 * @cycle_time: The pointer to the storage for the read value of cycle time.
 *
 * Read the value from the Isochronous Cycle Timer Register of 1394 OHCI in MMIO region for the
 * given controller card. This function accesses the region without any lock primitives or IRQ mask.
 * When returning successfully, the content of the @cycle_time argument has the value aligned to
 * host endianness, formatted by the CYCLE_TIME CSR Register of IEEE 1394 std.
 *
 * Context: Any context.
 * Return:
 * * 0 - Read successfully.
 * * -ENODEV - The controller is unavailable due to being removed or unbound.
 */
int fw_card_read_cycle_time(struct fw_card *card, u32 *cycle_time)
{
	if (card->driver->read_csr == dummy_read_csr)
		return -ENODEV;

	// It's possible to switch to the dummy driver between the check above and the read below.
	// This is the best effort to return -ENODEV.
	*cycle_time = card->driver->read_csr(card, CSR_CYCLE_TIME);
	return 0;
}
EXPORT_SYMBOL_GPL(fw_card_read_cycle_time);
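
/*
 * Illustrative note on the CYCLE_TIME format returned above: per the CSR
 * architecture, bits 31..25 hold seconds (0..127), bits 24..12 the cycle
 * count (0..7999), and bits 11..0 the cycle offset (0..3071, in units of
 * the 24.576 MHz clock, about 40.69 ns each).
 */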