This paper presents work that has been used to support the US Nuclear Regulatory Commission's evaluation of General Electric's simplified boiling water reactor (SBWR). The SBWR is an advanced design that relies on a passive containment cooling system (PCCS) to remove thermal loads from the drywell. The PCCS heat exchangers remove core decay power by free convection and transfer this energy to an external pool of water located above containment. As part of a research effort to better understand passive heat removal dynamics, a series of numerical steady-state simulations in the presence of noncondensable gases was performed to evaluate RELAP5/MOD3 against test data. This preliminary assessment was made using data from the University of California, Berkeley (UCB), natural circulation loop test facility. The code reproduced the heat transfer data to within about 5% with a three-node model. The three-node model has a large cell in the entrance region that smears out the entrance effects on heat transfer, which tend to overpredict the condensation. Hence, the UCB correlation predicts condensation heat transfer in the presence of noncondensable gases with only a coarse mesh. The cell length term in the condensation heat transfer correlation implemented in the code must be removed to allow accurate calculations with smaller cell sizes.

The principal component breakdown by features that you have there basically tells you the "direction" each principal component points to in terms of the directions of the features. In each principal component, features that have a greater absolute weight "pull" the principal component more toward that feature's direction. For example, we can say that in PC1, since Feature A, Feature B, Feature I, and Feature J have relatively low weights (in absolute value), PC1 is not pointing as much in the direction of these features in feature space. PC1 will be pointing most in the direction of Feature E relative to the other directions.

For a visualization of this, look at the following figures taken from here and here. The first shows an example of running PCA on correlated data. We can visually see that both eigenvectors derived from PCA are being "pulled" in both the Feature 1 and Feature 2 directions. Thus, if we were to make a principal component breakdown table like you made, we would expect to see some weightage from both Feature 1 and Feature 2 explaining PC1 and PC2. Next, we have an example with uncorrelated data. Let us call the green principal component PC1 and the pink one PC2. It's clear that PC1 is not pulled in the direction of feature x', nor is PC2 in the direction of feature y'. Thus, in our table, we must have a weightage of 0 for feature x' in PC1 and a weightage of 0 for feature y' in PC2. I hope this gives an idea of what you're seeing in your table.
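To make the "pulling" idea concrete, here is a small sketch (my own, not part of the original answer; all variable names are illustrative) that fits PCA on a correlated and an uncorrelated two-feature dataset and prints the resulting weight tables:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Correlated data: Feature 2 is Feature 1 plus noise, so both components
# are "pulled" in both feature directions and every weight is nonzero.
f1 = rng.normal(size=500)
correlated = np.column_stack([f1, f1 + 0.3 * rng.normal(size=500)])

# Uncorrelated, axis-aligned data: each component lines up with one
# feature, so the cross weights are close to zero.
uncorrelated = np.column_stack([rng.normal(scale=3.0, size=500),
                                rng.normal(scale=1.0, size=500)])

for name, data in [("correlated", correlated), ("uncorrelated", uncorrelated)]:
    pca = PCA(n_components=2).fit(data)
    table = pd.DataFrame(pca.components_,
                         columns=["Feature 1", "Feature 2"],
                         index=["PC1", "PC2"])
    print(name)
    print(table.round(2), "\n")
```

On the correlated data, both features receive substantial weight in each component; on the axis-aligned data, the cross weights collapse toward zero, matching the figures described above.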
Terminology: First of all, the results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score).

PART 1: I explain how to check the importance of the features and how to plot a biplot.

PART 2: I explain how to get the most important features on the PCs, with names, and save them into a pandas dataframe. A summary of both parts is given in an article (a compact Python guide).

PART 1: In your case, the value -0.56 for Feature E is the score of this feature on PC1. This value tells us "how much" the feature influences the PC (in our case, PC1). So the higher the value in absolute terms, the higher the influence on the principal component. After performing the PCA analysis, people usually plot the known "biplot" to see the transformed features in the N dimensions (2 in our case) together with the original variables (features). Example using the iris data:

    import matplotlib.pyplot as plt
    from sklearn import datasets
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    # In general it is a good idea to scale the data before PCA
    X = StandardScaler().fit_transform(X)

    scores = PCA().fit_transform(X)
    xs, ys = scores[:, 0], scores[:, 1]
    plt.scatter(xs, ys, c=y)  # scores on the first two PCs, colored by class
    plt.show()

PART 2: The important features are the ones that influence the components more and thus have a large absolute value on the component. To get the most important features on the PCs, with names, and save them into a pandas dataframe, use this:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    # train_features is your data matrix; initial_feature_names is the
    # list of your original column names
    model = PCA(n_components=2).fit(train_features)
    n_pcs = model.components_.shape[0]

    # get the index of the most important feature on EACH component
    most_important = [np.abs(model.components_[i]).argmax() for i in range(n_pcs)]
    most_important_names = [initial_feature_names[most_important[i]] for i in range(n_pcs)]

    dic = {'PC{}'.format(i + 1): most_important_names[i] for i in range(n_pcs)}
    df = pd.DataFrame(dic.items())

So on PC1 the feature named e is the most important, and on PC2 the d.
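If you want the whole loadings table rather than only the single strongest feature per component, here is a minimal sketch (my own addition; the feature names A-E are hypothetical stand-ins for the table in the question):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical stand-in for the data behind the questioner's table:
# 100 samples and 5 named features.
feature_names = ["Feature A", "Feature B", "Feature C", "Feature D", "Feature E"]
X = np.random.rand(100, 5)

pca = PCA(n_components=2).fit(X)

# Rows are components, columns are the original features; each entry is
# the loading (weight) of that feature on that component, analogous to
# the -0.56 for Feature E discussed above.
loadings = pd.DataFrame(pca.components_, columns=feature_names,
                        index=["PC1", "PC2"])
print(loadings.round(2))
print(loadings.abs().idxmax(axis=1))  # strongest feature per component
```

The final line reproduces in one call what the list comprehension in PART 2 computes per component.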