Java Concurrency Interview Questions (+ Answers)

This article is part of a series:

• Java Collections Interview Questions

• Java Type System Interview Questions

• Java Concurrency Interview Questions (+ Answers) (current article)

• Java Class Structure and Initialization Interview Questions

• Java 8 Interview Questions (+ Answers)

• Memory Management in Java Interview Questions (+ Answers)

• Java Generics Interview Questions (+ Answers)

• Java Flow Control Interview Questions (+ Answers)

• Java Exceptions Interview Questions (+ Answers)

• Java Annotations Interview Questions (+ Answers)

• Top Spring Framework Interview Questions

1. Introduction

Concurrency in Java is one of the most complex and advanced topics brought up during technical interviews. This article provides answers to some of the interview questions on the topic that you may encounter.

Q1. What Is the Difference Between a Process and a Thread?

Both processes and threads are units of concurrency, but they have a fundamental difference: processes do not share common memory, while threads do.

From the operating system's point of view, a process is an independent piece of software that runs in its own virtual memory space. Any multitasking operating system (which means almost any modern operating system) has to separate processes in memory so that one failing process does not drag all the other processes down by scrambling common memory.

The processes are thus usually isolated, and they cooperate via inter-process communication, which is defined by the operating system as a kind of intermediate API.

On the contrary, a thread is a part of an application that shares common memory with other threads of the same application. Using common memory allows shaving off a lot of overhead, designing the threads to cooperate and exchange data between them much faster.

Q2. How Can You Create a Thread Instance and Run It?

To create an instance of a thread, you have two options. First, pass a Runnable instance to its constructor and call start(). Runnable is a functional interface, so it can be passed as a lambda expression:

Thread thread1 = new Thread(() -> System.out.println("Hello World from Runnable!"));
thread1.start();

Thread also implements Runnable, so another way of starting a thread is to create an anonymous subclass, override its run() method and then call start():

Thread thread2 = new Thread() {
    @Override
    public void run() {
        System.out.println("Hello World from subclass!");
    }
};
thread2.start();

Q3. Describe the Different States of a Thread and When the State Transitions Occur.

The state of a Thread can be checked using the Thread.getState() method. The different states of a Thread are described in the Thread.State enum. They are:

  • NEW - a new Thread instance that has not yet been started via Thread.start()
  • RUNNABLE - a running thread. It is called runnable because at any given time it could either be running or waiting for the next quantum of time from the thread scheduler. A NEW thread enters the RUNNABLE state when you call Thread.start() on it
  • BLOCKED - a running thread becomes blocked if it needs to enter a synchronized section but cannot do that due to another thread holding the monitor of this section
  • WAITING - a thread enters this state if it waits for another thread to perform a particular action. For instance, a thread enters this state upon calling the Object.wait() method on a monitor it holds, or the Thread.join() method on another thread
  • TIMED_WAITING - same as the above, but a thread enters this state after calling timed versions of Thread.sleep(), Object.wait(), Thread.join() and some other methods
  • TERMINATED - a thread has completed the execution of its Runnable.run() method and terminated
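A minimal sketch of some of these transitions; the ThreadStates class name is ours, and the exact intermediate states you observe depend on thread scheduling:

public class ThreadStates {                        // illustrative class name, not from the article
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(1000);                // the thread spends most of its life sleeping
            } catch (InterruptedException ignored) { }
        });

        System.out.println(worker.getState());     // NEW - created but not yet started
        worker.start();                            // NEW -> RUNNABLE
        Thread.sleep(100);
        System.out.println(worker.getState());     // most likely TIMED_WAITING - inside the timed sleep()
        worker.join();
        System.out.println(worker.getState());     // TERMINATED - run() has completed
    }
}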

Q4. What Is the Difference Between the Runnable and Callable Interfaces? How Are They Used?

The Runnable interface has a single run method. It represents a unit of computation that has to be run in a separate thread. The Runnable interface does not allow this method to return a value or to throw checked exceptions.

The Callable interface has a single call method and represents a task that has a value. That is why the call method returns a value. It can also throw exceptions. Callable is generally used in ExecutorService instances to start an asynchronous task and then call the returned Future instance to get its value.
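For example, a minimal sketch of this pattern (variable names are ours; the checked exceptions thrown by Future.get() are assumed to be handled by the surrounding code):

ExecutorService executor = Executors.newSingleThreadExecutor();
Callable<Integer> task = () -> 2 + 2;            // returns a value and may throw checked exceptions

Future<Integer> future = executor.submit(task);  // starts the task asynchronously
System.out.println(future.get());                // blocks until the result (4) is available
executor.shutdown();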

Q5. What Is a Daemon Thread, What Are Its Use Cases? How Can You Create a Daemon Thread?

A daemon thread is a thread that does not prevent the JVM from exiting. When all non-daemon threads are terminated, the JVM simply abandons all remaining daemon threads. Daemon threads are usually used to carry out some supportive or service tasks for other threads, but you should keep in mind that they may be abandoned at any time.

To start a thread as a daemon, you should use the setDaemon() method before calling start():

Thread daemon = new Thread(() -> System.out.println("Hello from daemon!"));
daemon.setDaemon(true);
daemon.start();

Curiously, if you run this as part of the main() method, the message might not get printed. This could happen if the main() thread terminates before the daemon gets to the point of printing the message. You generally should not do any I/O in daemon threads, as they will not even be able to execute their finally blocks and close resources if abandoned.

Q6. What Is the Thread's Interrupt Flag? How Can You Set and Check It? How Does It Relate to the InterruptedException?

The interrupt flag, or interrupt status, is an internal Thread flag that is set when the thread is interrupted. To set it, simply call thread.interrupt() on the thread object.

If a thread is currently inside one of the methods that throw InterruptedException (wait, join, sleep etc.), then this method immediately throws InterruptedException. The thread is free to process this exception according to its own logic.

If a thread is not inside such a method and thread.interrupt() is called, nothing special happens. It is the thread's responsibility to periodically check the interrupt status using the static Thread.interrupted() or the instance isInterrupted() method. The difference between these methods is that the static Thread.interrupted() clears the interrupt flag, while isInterrupted() does not.
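A common cooperative pattern, sketched below with names of our own, is to poll isInterrupted() in the worker's loop and exit once the flag is set:

Thread worker = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        // do some work; isInterrupted() checks the flag without clearing it
    }
    System.out.println("Interrupt flag detected, exiting");
});

worker.start();
worker.interrupt();   // sets the interrupt flag of the worker thread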

Q7. What Are Executor and ExecutorService? What Are the Differences Between These Interfaces?

Executor and ExecutorService are two related interfaces of the java.util.concurrent framework. Executor is a very simple interface with a single execute method accepting Runnable instances for execution. In most cases, this is the interface that your task-executing code should depend on.

ExecutorService extends the Executor interface with multiple methods for handling and checking the lifecycle of a concurrent task execution service (termination of tasks in case of shutdown) and methods for more complex asynchronous task handling including Futures.
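A small sketch of the difference (variable names are ours; the checked exceptions of Future.get() are omitted for brevity):

// Executor only accepts a Runnable and gives no handle on the submitted task
Executor executor = Executors.newSingleThreadExecutor();
executor.execute(() -> System.out.println("fire and forget"));

// ExecutorService adds lifecycle control and Future-based task handling
ExecutorService service = Executors.newFixedThreadPool(2);
Future<String> future = service.submit(() -> "result");
System.out.println(future.get());  // blocks until the task completes
service.shutdown();                // stops accepting new tasks, lets running ones finish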

For more info on using Executor and ExecutorService, see the article A Guide to Java ExecutorService.

Q8. What Are the Available Implementations of ExecutorService in the Standard Library?

The ExecutorService interface has three standard implementations (see the sketch after this list for how they are typically obtained):

  • ThreadPoolExecutor — for executing tasks using a pool of threads. Once a thread is finished executing the task, it goes back into the pool. If all threads in the pool are busy, then the task has to wait for its turn.
  • ScheduledThreadPoolExecutor allows scheduling task execution instead of running it immediately when a thread is available. It can also schedule tasks at a fixed rate or with a fixed delay.
  • ForkJoinPool is a special ExecutorService for dealing with recursive algorithm tasks. If you use a regular ThreadPoolExecutor for a recursive algorithm, you will quickly find that all your threads are busy waiting for the lower levels of recursion to finish. The ForkJoinPool implements the so-called work-stealing algorithm that allows it to use available threads more efficiently.
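A minimal sketch of obtaining each implementation; the pool sizes and the scheduled task are arbitrary choices of ours:

// a fixed pool of 4 threads, backed by a ThreadPoolExecutor
ExecutorService fixedPool = Executors.newFixedThreadPool(4);

// a ScheduledThreadPoolExecutor: run a task every 5 seconds after a 1-second initial delay
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.scheduleAtFixedRate(() -> System.out.println("tick"), 1, 5, TimeUnit.SECONDS);

// the shared ForkJoinPool, suitable for divide-and-conquer tasks
ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();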

Q9. What Is the Java Memory Model (JMM)? Describe Its Purpose and Basic Ideas.

The Java Memory Model is a part of the Java language specification described in chapter 17.4. It specifies how multiple threads access common memory in a concurrent Java application, and how data changes by one thread are made visible to other threads. While being quite short and concise, the JMM may be hard to grasp without a strong mathematical background.

The need for a memory model arises from the fact that the way your Java code accesses data is not how it actually happens at the lower levels. Memory writes and reads may be reordered or optimized by the Java compiler, the JIT compiler, and even the CPU, as long as the observable result of these reads and writes is the same.

This can lead to counter-intuitive results when your application is scaled to multiple threads because most of these optimizations take into account a single thread of execution (the cross-thread optimizers are still extremely hard to implement). Another huge problem is that the memory in modern systems is multilayered: multiple cores of a processor may keep some non-flushed data in their caches or read/write buffers, which also affects the state of the memory observed from other cores.

To make things worse, the existence of different memory access architectures would break Java's promise of "write once, run everywhere". Happily for the programmers, the JMM specifies some guarantees that you may rely upon when designing multithreaded applications. Sticking to these guarantees helps a programmer to write multithreaded code that is stable and portable across various architectures.

The main notions of JMM are:

  • Actions, these are inter-thread actions that can be executed by one thread and detected by another thread, like reading or writing variables, locking/unlocking monitors and so on
  • Synchronization actions, a certain subset of actions, like reading/writing a volatile variable, or locking/unlocking a monitor
  • Program Order (PO), the observable total order of actions inside a single thread
  • Synchronization Order (SO), the total order between all synchronization actions — it has to be consistent with Program Order, that is, if two synchronization actions come one before another in PO, they occur in the same order in SO
  • synchronizes-with (SW) relation between certain synchronization actions, like unlocking of monitor and locking of the same monitor (in another or the same thread)
  • Happens-before Order — combines PO with SW (this is called transitive closure in set theory) to create a partial ordering of all actions between threads. If one action happens-before another, then the results of the first action are observable by the second action (for instance, write of a variable in one thread and read in another)
  • Happens-before consistency — a set of actions is HB-consistent if every read observes either the last write to that location in the happens-before order, or some other write via data race
  • Execution — a certain set of ordered actions and consistency rules between them

For a given program, we can observe multiple different executions with various outcomes. But if a program is correctly synchronized, then all of its executions appear to be sequentially consistent, meaning you can reason about the multithreaded program as a set of actions occurring in some sequential order. This saves you the trouble of thinking about under-the-hood reorderings, optimizations or data caching.

Q10. What Is a Volatile Field and What Guarantees Does the JMM Hold for Such a Field?

A volatile field has special properties according to the Java Memory Model (see Q9). The reads and writes of a volatile variable are synchronization actions, meaning that they have a total ordering (all threads will observe a consistent order of these actions). A read of a volatile variable is guaranteed to observe the last write to this variable, according to this order.

If you have a field that is accessed from multiple threads, with at least one thread writing to it, then you should consider making it volatile, or else there is little guarantee about what a certain thread would read from this field.

Another guarantee for volatile is atomicity of writing and reading 64-bit values (long and double). Without a volatile modifier, a read of such field could observe a value partly written by another thread.
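A typical use case, sketched with a class name of our own, is a stop flag shared between threads:

class Worker {
    private volatile boolean running = true;   // a write by one thread is visible to all readers

    void stop() {        // called from another thread
        running = false;
    }

    void work() {
        while (running) {
            // without volatile, this loop might never observe the update to running
        }
    }
}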

Q11. Which of the Following Operations Are Atomic?

  • writing to a non-volatile int;
  • writing to a volatile int;
  • writing to a non-volatile long;
  • writing to a volatile long;
  • incrementing a volatile long?

A write to an int (32-bit) variable is guaranteed to be atomic, whether it is volatile or not. A long (64-bit) variable could be written in two separate steps, for example, on 32-bit architectures, so by default, there is no atomicity guarantee. However, if you specify the volatile modifier, a long variable is guaranteed to be accessed atomically.

The increment operation is usually done in multiple steps (retrieving a value, changing it and writing back), so it is never guaranteed to be atomic, whether the variable is volatile or not. If you need to implement atomic increment of a value, you should use classes AtomicInteger, AtomicLong etc.
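For example (a minimal sketch), the atomic classes perform the whole increment as one indivisible operation, which the ++ operator cannot guarantee even on a volatile field:

AtomicLong counter = new AtomicLong();   // from java.util.concurrent.atomic

counter.incrementAndGet();               // the whole read-modify-write cycle is one atomic operation
// by contrast, value++ on a plain or volatile long field consists of separate read,
// add and write steps and can lose updates under contention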

Q12. What Special Guarantees Does the JMM Hold for Final Fields of a Class?

JVM basically guarantees that final fields of a class will be initialized before any thread gets hold of the object. Without this guarantee, a reference to an object may be published, i.e. become visible, to another thread before all the fields of this object are initialized, due to reorderings or other optimizations. This could cause racy access to these fields.

This is why, when creating an immutable object, you should always make all its fields final, even if they are not accessible via getter methods.
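A minimal sketch of such an immutable class (the Point name and its fields are our own illustration):

public final class Point {

    private final int x;   // final fields are guaranteed to be fully initialized
    private final int y;   // before the reference to this Point becomes visible to other threads

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }

    public int getY() { return y; }
}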

Q13. What Is the Meaning of the Synchronized Keyword in the Definition of a Method? Of a Static Method? Before a Block?

The synchronized keyword before a block means that any thread entering this block has to acquire the monitor (the object in brackets). If the monitor is already acquired by another thread, the former thread will enter the BLOCKED state and wait until the monitor is released.

synchronized(object) { // ... }

A synchronized instance method has the same semantics, but the instance itself acts as a monitor.

synchronized void instanceMethod() { // ... }

For a static synchronized method, the monitor is the Class object representing the declaring class.

static synchronized void staticMethod() { // ... }

Q14. If Two Threads Call a Synchronized Method on Different Object Instances Simultaneously, Could One of These Threads Block? What If the Method Is Static?

If the method is an instance method, then the instance acts as a monitor for the method. Two threads calling the method on different instances acquire different monitors, so none of them gets blocked.

If the method is static, then the monitor is the Class object. For both threads, the monitor is the same, so one of them will probably block and wait for another to exit the synchronized method.

Q15. What Is the Purpose of the wait, notify and notifyAll Methods of the Object Class?

A thread that owns the object's monitor (for instance, a thread that has entered a synchronized section guarded by the object) may call object.wait() to temporarily release the monitor and give other threads a chance to acquire the monitor. This may be done, for instance, to wait for a certain condition.

When another thread that acquired the monitor fulfills the condition, it may call object.notify() or object.notifyAll() and release the monitor. The notify method awakes a single thread in the waiting state, and the notifyAll method awakes all threads that wait for this monitor, and they all compete for re-acquiring the lock.

The following BlockingQueue implementation shows how multiple threads work together via the wait-notify pattern. If we put an element into an empty queue, all threads that were waiting in the take method wake up and try to receive the value. If we put an element into a full queue, the put method waits for a call to the take method. The take method removes an element and notifies the threads waiting in the put method that the queue has an empty place for a new item.

public class BlockingQueue<T> {

    private List<T> queue = new LinkedList<>();
    private int limit = 10;

    public synchronized void put(T item) {
        while (queue.size() == limit) {
            try {
                wait();
            } catch (InterruptedException e) {}
        }
        if (queue.isEmpty()) {
            notifyAll();
        }
        queue.add(item);
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            try {
                wait();
            } catch (InterruptedException e) {}
        }
        if (queue.size() == limit) {
            notifyAll();
        }
        return queue.remove(0);
    }
}

Q16. Describe the Conditions of Deadlock, Livelock, and Starvation. Describe the Possible Causes of These Conditions.

Deadlock is a condition within a group of threads that cannot make progress because every thread in the group has to acquire some resource that is already acquired by another thread in the group. The most simple case is when two threads need to lock both of two resources to progress, the first resource is already locked by one thread, and the second by another. These threads will never acquire a lock to both resources and thus will never progress.
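A minimal sketch of that simple case (lock and thread names are ours; whether the deadlock actually manifests depends on thread scheduling):

Object lockA = new Object();
Object lockB = new Object();

Thread t1 = new Thread(() -> {
    synchronized (lockA) {
        synchronized (lockB) {
            System.out.println("thread 1 acquired both locks");
        }
    }
});
Thread t2 = new Thread(() -> {
    synchronized (lockB) {            // opposite acquisition order - the deadlock recipe
        synchronized (lockA) {
            System.out.println("thread 2 acquired both locks");
        }
    }
});
t1.start();
t2.start();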

Livelock is a case of multiple threads reacting to conditions, or events, generated by themselves. An event occurs in one thread and has to be processed by another thread. During this processing, a new event occurs which has to be processed in the first thread, and so on. Such threads are alive and not blocked, but still do not make any progress because they overwhelm each other with useless work.

Starvation is a case of a thread unable to acquire a resource because other threads occupy it for too long or have higher priority. A thread cannot make progress and thus is unable to carry out useful work.

Q17. Describe the Purpose and Use-Cases of the Fork/Join Framework.

The fork/join framework allows parallelizing recursive algorithms. The main problem with parallelizing recursion using something like ThreadPoolExecutor is that you may quickly run out of threads because each recursive step would require its own thread, while the threads up the stack would be idle and waiting.

The fork/join framework entry point is the ForkJoinPool class, which is an implementation of ExecutorService. It implements the work-stealing algorithm, where idle threads try to "steal" work from busy threads. This allows the calculations to be spread among different threads and to make progress while using fewer threads than a usual thread pool would require.
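A minimal sketch of a divide-and-conquer task (SumTask and its splitting threshold are our own illustration, not part of the framework):

class SumTask extends RecursiveTask<Long> {
    private final long from, to;

    SumTask(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {                  // small enough: compute sequentially
            long sum = 0;
            for (long i = from; i <= to; i++) {
                sum += i;
            }
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                               // submit the left half asynchronously
        return right.compute() + left.join();      // compute the right half, then join the left
    }
}

long total = ForkJoinPool.commonPool().invoke(new SumTask(1, 1_000_000));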

More information and code examples for the fork/join framework can be found in the article "Guide to the Fork/Join Framework in Java".
