Nzongani, U., Zylberman, J., Doncecchi, C. E., Perez, A., Debbasch, F., & Arnault, P. (2023). Quantum circuits for discrete-time quantum walks with position-dependent coin operator. Quantum Inf. Process., 22(7), 270–46pp.
Abstract: The aim of this paper is to build quantum circuits that implement discrete-time quantum walks having an arbitrary position-dependent coin operator. The position of the walker is encoded in base 2: with n wires, each corresponding to one qubit, we encode 2^n position states. The data necessary to define an arbitrary position-dependent coin operator are therefore exponential in n; hence, this exponentiality must appear somewhere in our circuits. We first propose a naive circuit implementing the position-dependent coin operator, in the sense that it has exponential depth and applies all the position-dependent coin operators sequentially. We then propose a circuit that "transfers" all the depth into ancillae, yielding a final depth that is linear in n at the cost of an exponential number of ancillae. The main idea of this linear-depth circuit is to implement in parallel all coin operators at the different positions. Reducing the depth exponentially at the cost of an exponential number of ancillae is a goal that has already been achieved for the problem of loading classical data on a quantum circuit (Araujo in Sci Rep 11:6329, 2021); notice that such a circuit can be used to load the initial state of the walker. Here, we achieve this goal for the problem of applying a position-dependent coin operator in a discrete-time quantum walk. Finally, we extend the result of Welch (New J Phys 16:033040, 2014) from position-dependent unitaries that are diagonal in the position basis to position-dependent 2 × 2-block-diagonal unitaries: indeed, we show that for a position dependence of the coin operator (the block-diagonal unitary) that is smooth enough, one can find an efficient quantum-circuit implementation approximating the coin operator up to an error ε (in terms of the spectral norm), whose depth and size scale as O(1/ε).
A typical application of the efficient implementation would be the quantum simulation of a relativistic spin-1/2 particle on a lattice, coupled to a smooth external gauge field; notice that, recently, quantum spatial-search schemes have been developed that use gauge fields as the oracle to mark the vertex to be found (Zylberman in Entropy 23:1441, 2021; Fredon in arXiv:2210.13920). A typical application of the linear-depth circuit would be when there is spatial noise on the coin operator (and hence a non-smooth dependence on the position).
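The circuit constructions themselves are the paper's contribution and are not reproduced here; as a purely classical, dense-matrix illustration of what a position-dependent coin does in one walk step (all names and conventions below are ours, not the paper's), one can sketch:

```python
import numpy as np

def dtqw_step(psi, coins):
    """One step of a discrete-time quantum walk with a
    position-dependent coin (dense classical simulation).

    psi   : complex array of shape (N, 2) -- amplitude at position x, coin state c
    coins : complex array of shape (N, 2, 2) -- one 2x2 unitary per position;
            storing them all explicitly is what makes the data exponential
            in the number of position qubits n (N = 2**n).
    """
    # Apply the coin operator at every position in parallel.
    psi = np.einsum('xab,xb->xa', coins, psi)
    # Conditional shift: coin state 0 moves left, 1 moves right (periodic).
    out = np.empty_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)
    out[:, 1] = np.roll(psi[:, 1], +1)
    return out

# Example: Hadamard coin everywhere, walker localized at the centre.
N = 8
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
coins = np.broadcast_to(H, (N, 2, 2)).copy()
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2, 0] = 1.0
psi = dtqw_step(psi, coins)
```

Making `coins` vary with `x` (e.g. a position-dependent phase modelling a gauge field) is what the paper's circuits implement on actual qubits.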
Natochii, A., et al., & Marinas, C. (2023). Measured and projected beam backgrounds in the Belle II experiment at the SuperKEKB collider. Nucl. Instrum. Methods Phys. Res. A, 1055, 168550–21pp.
Abstract: The Belle II experiment at the SuperKEKB electron-positron collider aims to collect an unprecedented data set of 50 ab^-1 to study CP violation in the B-meson system and to search for physics beyond the Standard Model. SuperKEKB is already the world's highest-luminosity collider. In order to collect the planned data set within approximately one decade, the target is to reach a peak luminosity of 6 × 10^35 cm^-2 s^-1 by further increasing the beam currents and reducing the beam size at the interaction point by squeezing the betatron function down to β_y* = 0.3 mm. To ensure detector longevity and maintain good reconstruction performance, beam backgrounds must remain well controlled. We report on current background rates in Belle II and compare these against simulation. We find that a number of recent refinements have significantly improved the background simulation accuracy. Finally, we estimate the safety margins going forward. We predict that backgrounds should remain high but acceptable until a luminosity of at least 2.8 × 10^35 cm^-2 s^-1 is reached for β_y* = 0.6 mm. At this point, the most vulnerable Belle II detectors, the Time-of-Propagation (TOP) particle identification system and the Central Drift Chamber (CDC), have predicted background hit rates from single-beam and luminosity backgrounds that add up to approximately half of the maximum acceptable rates.
de los Rios, M., Petac, M., Zaldivar, B., Bonaventura, N. R., Calore, F., & Iocco, F. (2023). Determining the dark matter distribution in simulated galaxies with deep learning. Mon. Not. Roy. Astron. Soc., 525(4), 6015–6035.
Abstract: We present a novel method of inferring the dark matter (DM) content and spatial distribution within galaxies, using convolutional neural networks (CNNs) trained within state-of-the-art hydrodynamical simulations (Illustris-TNG100). Within the controlled environment of the simulation, the framework we have developed is capable of inferring the DM mass distribution within galaxies of mass ~10^11–10^13 M_⊙, from the gravitationally baryon-dominated internal regions to the DM-rich, baryon-depleted outskirts of the galaxies, with a mean absolute error always below ≈0.25 when using photometric and spectroscopic information. With respect to traditional methods, the one presented here also has the advantages of not relying on a pre-assigned shape for the DM distribution, of being applicable to galaxies not necessarily in isolation, and of performing very well even in the absence of spectroscopic observations.
Roser, J., Barrientos, L., Bernabeu, J., Borja-Lloret, M., Muñoz, E., Ros, A., et al. (2022). Joint image reconstruction algorithm in Compton cameras. Phys. Med. Biol., 67(15), 155009–15pp.
Abstract: Objective. To demonstrate the benefits of using a joint image reconstruction algorithm, based on List Mode Maximum Likelihood Expectation Maximization, that combines events measured in different information channels of a Compton camera. Approach. Both simulations and experimental data are employed to show the algorithm's performance. Main results. The obtained joint images present improved image quality and yield better estimates of displacements of high-energy gamma-ray-emitting sources. The algorithm also provides images that are more stable than any individual channel against the noisy convergence that characterizes Maximum Likelihood-based algorithms. Significance. The joint reconstruction algorithm can improve the quality and robustness of Compton camera images. It is also highly versatile, as it can be easily adapted to any Compton camera geometry. It is thus expected to represent an important step in the optimization of Compton camera imaging.
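The paper's list-mode joint algorithm and Compton-camera system model are not reproduced here; as a generic, binned MLEM sketch of the underlying update rule (the matrix and variable names below are our own illustrative conventions), joint reconstruction amounts to stacking the system matrices and data of the different channels:

```python
import numpy as np

def mlem_update(lam, A, y, eps=1e-12):
    """One Maximum-Likelihood Expectation-Maximization update.

    lam : current image estimate, shape (n_voxels,)
    A   : system matrix, shape (n_bins, n_voxels)
    y   : measured counts, shape (n_bins,)
    """
    proj = A @ lam                       # forward projection of current image
    ratio = y / np.maximum(proj, eps)    # data / model comparison
    sens = A.sum(axis=0)                 # per-voxel sensitivity normalization
    return lam * (A.T @ ratio) / np.maximum(sens, eps)

def joint_system(channels):
    """Combine (A_i, y_i) pairs from several information channels
    into a single stacked system for a joint MLEM reconstruction."""
    A = np.vstack([A_i for A_i, _ in channels])
    y = np.concatenate([y_i for _, y_i in channels])
    return A, y
```

Iterating `mlem_update` on the stacked system lets all channels constrain the same image, which is the sense in which the joint images are more stable than any single channel.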
LHCb Collaboration (Aaij, R., et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Centrality determination in heavy-ion collisions with the LHCb detector. J. Instrum., 17(5), P05009–31pp.
Abstract: The centrality of heavy-ion collisions is directly related to the medium created in these interactions. A procedure to determine the centrality of collisions with the LHCb detector is implemented for lead-lead collisions at √s_NN = 5 TeV and lead-neon fixed-target collisions at √s_NN = 69 GeV. The energy deposits in the electromagnetic calorimeter are used to define the centrality classes. The correspondence between the number of participants and the centrality for the lead-lead collisions is in good agreement with that found by other experiments, and the centrality measurements for the lead-neon collisions presented here are the first performed in fixed-target collisions at the LHC.
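The actual LHCb calibration (linking calorimeter energy to the number of participants) is more involved than what follows; as a minimal illustration of the percentile-based idea of slicing an event-by-event energy distribution into centrality classes (conventions and names ours, not LHCb's):

```python
import numpy as np

def centrality_edges(ecal_energy, n_classes=10):
    """Define centrality-class boundaries from the per-event calorimeter
    energy distribution: higher deposited energy <-> more central, so the
    top 1/n_classes most energetic events form class 0, and so on."""
    pct = np.linspace(0, 100, n_classes + 1)
    return np.percentile(ecal_energy, 100 - pct)  # decreasing edges

def centrality_class(e, edges):
    """Return the centrality class index for energy e (0 = most central).

    edges is decreasing, so search on the negated (increasing) array."""
    idx = np.searchsorted(-edges, -e, side='right') - 1
    return int(np.clip(idx, 0, len(edges) - 2))
```

A real analysis would additionally map each class to a mean number of participants, e.g. via a Glauber-model fit to the same distribution.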