java.lang.Object
  org.apache.hadoop.hdfs.server.namenode.INode
    org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields
      org.apache.hadoop.hdfs.server.namenode.INodeDirectory
public class INodeDirectory
extends INodeWithAdditionalFields
implements INodeDirectoryAttributes

Directory INode class.
| Nested Class Summary | |
|---|---|
static class |
INodeDirectory.SnapshotAndINode
A pair of Snapshot and INode objects. |
| Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.server.namenode.INode |
|---|
INode.BlocksMapUpdateInfo, INode.Feature |
| Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes |
|---|
INodeDirectoryAttributes.CopyWithQuota, INodeDirectoryAttributes.SnapshotCopy |
| Field Summary | |
|---|---|
protected static int |
DEFAULT_FILES_PER_DIRECTORY
|
| Fields inherited from class org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields |
|---|
features |
| Fields inherited from class org.apache.hadoop.hdfs.server.namenode.INode |
|---|
LOG |
| Constructor Summary | |
|---|---|
INodeDirectory(INodeDirectory other,
boolean adopt,
INode.Feature... featuresToCopy)
Copy constructor. |
|
INodeDirectory(long id,
byte[] name,
org.apache.hadoop.fs.permission.PermissionStatus permissions,
long mtime)
Constructor. |
|
| Method Summary | |
|---|---|
boolean |
addChild(org.apache.hadoop.hdfs.server.namenode.INode node)
|
boolean |
addChild(org.apache.hadoop.hdfs.server.namenode.INode node,
boolean setModTime,
int latestSnapshotId)
Add a child inode to the directory. |
org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot |
addSnapshot(int id,
String name)
|
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature |
addSnapshotFeature(DirectoryWithSnapshotFeature.DirectoryDiffList diffs)
|
void |
addSnapshottableFeature()
Add the DirectorySnapshottableFeature. |
void |
addSpaceConsumed(long nsDelta,
long dsDelta,
boolean verify)
Check and add namespace/diskspace consumed to itself and the ancestors. |
INodeDirectory |
asDirectory()
Cast this inode to an INodeDirectory. |
Quota.Counts |
cleanSubtree(int snapshotId,
int priorSnapshotId,
INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes,
boolean countDiffChange)
Clean the subtree under this inode and collect the blocks from the descendants for further block deletion/update. |
Quota.Counts |
cleanSubtreeRecursively(int snapshot,
int prior,
INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes,
Map<org.apache.hadoop.hdfs.server.namenode.INode,org.apache.hadoop.hdfs.server.namenode.INode> excludedNodes,
boolean countDiffChange)
Call cleanSubtree(..) recursively down the subtree. |
void |
clear()
Clear references to other objects. |
void |
clearChildren()
Set the children list to null. |
org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext |
computeContentSummary(org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary)
Count subtree content summary with a Content.Counts. |
Quota.Counts |
computeQuotaUsage(Quota.Counts counts,
boolean useCache,
int lastSnapshotId)
Count subtree Quota.NAMESPACE and Quota.DISKSPACE usages. |
Quota.Counts |
computeQuotaUsage4CurrentDirectory(Quota.Counts counts)
Add quota usage for this inode excluding children. |
void |
destroyAndCollectBlocks(INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes)
Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion. |
void |
dumpTreeRecursively(PrintWriter out,
StringBuilder prefix,
int snapshot)
Dump tree recursively. |
static void |
dumpTreeRecursively(PrintWriter out,
StringBuilder prefix,
Iterable<INodeDirectory.SnapshotAndINode> subs)
Dump the given subtrees. |
org.apache.hadoop.hdfs.server.namenode.INode |
getChild(byte[] name,
int snapshotId)
|
org.apache.hadoop.hdfs.util.ReadOnlyList<org.apache.hadoop.hdfs.server.namenode.INode> |
getChildrenList(int snapshotId)
|
int |
getChildrenNum(int snapshotId)
|
DirectoryWithSnapshotFeature.DirectoryDiffList |
getDiffs()
|
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature |
getDirectorySnapshottableFeature()
|
DirectoryWithQuotaFeature |
getDirectoryWithQuotaFeature()
If the directory contains a DirectoryWithQuotaFeature, return it;
otherwise, return null. |
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature |
getDirectoryWithSnapshotFeature()
If feature list contains a DirectoryWithSnapshotFeature, return it;
otherwise, return null. |
byte |
getLocalStoragePolicyID()
|
Quota.Counts |
getQuotaCounts()
Get the quota set for this inode. |
org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot |
getSnapshot(byte[] snapshotName)
|
org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes |
getSnapshotINode(int snapshotId)
|
byte |
getStoragePolicyID()
|
boolean |
isDirectory()
Check whether it's a directory. |
boolean |
isSnapshottable()
|
boolean |
isWithSnapshot()
Does this inode have the snapshot feature? |
boolean |
metadataEquals(org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes other)
Compare the metadata with another INodeDirectory. |
void |
recordModification(int latestSnapshotId)
This inode is being modified. |
boolean |
removeChild(org.apache.hadoop.hdfs.server.namenode.INode child)
Remove the specified child from this directory. |
boolean |
removeChild(org.apache.hadoop.hdfs.server.namenode.INode child,
int latestSnapshotId)
Remove the specified child from this directory. |
org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot |
removeSnapshot(String snapshotName,
INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes)
|
void |
removeSnapshottableFeature()
Remove the DirectorySnapshottableFeature. |
void |
renameSnapshot(String path,
String oldName,
String newName)
|
void |
replaceChild(org.apache.hadoop.hdfs.server.namenode.INode oldChild,
org.apache.hadoop.hdfs.server.namenode.INode newChild,
INodeMap inodeMap)
Replace the given child with a new child. |
org.apache.hadoop.hdfs.server.namenode.INode |
saveChild2Snapshot(org.apache.hadoop.hdfs.server.namenode.INode child,
int latestSnapshotId,
org.apache.hadoop.hdfs.server.namenode.INode snapshotCopy)
Save the child to the latest snapshot. |
int |
searchChild(org.apache.hadoop.hdfs.server.namenode.INode inode)
Search for the given INode in the children list and the deleted lists of snapshots. |
void |
setSnapshotQuota(int snapshotQuota)
|
String |
toDetailString()
|
void |
undoRename4DstParent(org.apache.hadoop.hdfs.server.namenode.INode deletedChild,
int latestSnapshotId)
Undo the rename operation for the dst tree, i.e., if the rename operation (with OVERWRITE option) removes a file/dir from the dst tree, add it back and delete possible record in the deleted list. |
void |
undoRename4ScrParent(INodeReference oldChild,
org.apache.hadoop.hdfs.server.namenode.INode newChild)
This method is usually called by the undo section of rename. |
static INodeDirectory |
valueOf(org.apache.hadoop.hdfs.server.namenode.INode inode,
Object path)
Cast INode to INodeDirectory. |
| Methods inherited from class org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields |
|---|
addAclFeature, addFeature, addXAttrFeature, getFeature, getFeatures, getFsPermissionShort, getId, getLocalNameBytes, getNext, getPermissionLong, removeAclFeature, removeFeature, removeXAttrFeature, setAccessTime, setLocalName, setModificationTime, setNext, updateModificationTime |
| Methods inherited from class org.apache.hadoop.hdfs.server.namenode.INode |
|---|
asFile, asReference, asSymlink, compareTo, computeAndConvertContentSummary, computeContentSummary, computeQuotaUsage, computeQuotaUsage, dumpTreeRecursively, dumpTreeRecursively, equals, getAccessTime, getAclFeature, getFsPermission, getFullPathName, getGroupName, getKey, getLocalName, getModificationTime, getObjectString, getParent, getParentReference, getParentString, getPathComponents, getPathNames, getUserName, getXAttrFeature, hashCode, isAncestorDirectory, isFile, isInLatestSnapshot, isQuotaSet, isReference, isSymlink, setAccessTime, setModificationTime, setParent, setParentReference, shouldRecordInSrcSnapshot, toString |
| Methods inherited from class java.lang.Object |
|---|
clone, finalize, getClass, notify, notifyAll, wait, wait, wait |
| Methods inherited from interface org.apache.hadoop.hdfs.server.namenode.INodeAttributes |
|---|
getAccessTime, getAclFeature, getFsPermission, getFsPermissionShort, getGroupName, getLocalNameBytes, getModificationTime, getPermissionLong, getUserName, getXAttrFeature |
| Field Detail |
|---|
protected static final int DEFAULT_FILES_PER_DIRECTORY
| Constructor Detail |
|---|
public INodeDirectory(long id,
byte[] name,
org.apache.hadoop.fs.permission.PermissionStatus permissions,
long mtime)
public INodeDirectory(INodeDirectory other,
boolean adopt,
INode.Feature... featuresToCopy)
Parameters:
other - The INodeDirectory to be copied
adopt - Indicates whether or not to set the parent field of child INodes to the new node
featuresToCopy - any number of features to copy to the new node. The method will do a reference copy, not a deep copy.

| Method Detail |
|---|
public static INodeDirectory valueOf(org.apache.hadoop.hdfs.server.namenode.INode inode,
Object path)
throws FileNotFoundException,
org.apache.hadoop.fs.PathIsNotDirectoryException
Throws:
FileNotFoundException
org.apache.hadoop.fs.PathIsNotDirectoryException

public final boolean isDirectory()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Check whether it's a directory.
Overrides:
isDirectory in class org.apache.hadoop.hdfs.server.namenode.INode

public final INodeDirectory asDirectory()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Cast this inode to an INodeDirectory.
Overrides:
asDirectory in class org.apache.hadoop.hdfs.server.namenode.INode

public byte getLocalStoragePolicyID()
Overrides:
getLocalStoragePolicyID in class org.apache.hadoop.hdfs.server.namenode.INode
Returns:
BlockStoragePolicySuite.ID_UNSPECIFIED if no policy has been specified.

public byte getStoragePolicyID()
Overrides:
getStoragePolicyID in class org.apache.hadoop.hdfs.server.namenode.INode

public Quota.Counts getQuotaCounts()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Get the quota set for this inode.
Specified by:
getQuotaCounts in interface org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes
Overrides:
getQuotaCounts in class org.apache.hadoop.hdfs.server.namenode.INode
public void addSpaceConsumed(long nsDelta,
long dsDelta,
boolean verify)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Check and add namespace/diskspace consumed to itself and the ancestors.
Overrides:
addSpaceConsumed in class org.apache.hadoop.hdfs.server.namenode.INode
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException - if the quota is violated.

public final DirectoryWithQuotaFeature getDirectoryWithQuotaFeature()
If the directory contains a DirectoryWithQuotaFeature, return it;
otherwise, return null.
public org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature addSnapshotFeature(DirectoryWithSnapshotFeature.DirectoryDiffList diffs)
public final org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature getDirectoryWithSnapshotFeature()
If the feature list contains a DirectoryWithSnapshotFeature, return it;
otherwise, return null.
public final boolean isWithSnapshot()
public DirectoryWithSnapshotFeature.DirectoryDiffList getDiffs()
public org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes getSnapshotINode(int snapshotId)
Overrides:
getSnapshotINode in class org.apache.hadoop.hdfs.server.namenode.INode
Returns:
if the snapshot id is Snapshot.CURRENT_STATE_ID, return this; otherwise return the corresponding snapshot inode.

public String toDetailString()
Overrides:
toDetailString in class org.apache.hadoop.hdfs.server.namenode.INode

public org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature getDirectorySnapshottableFeature()
public boolean isSnapshottable()
public org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot getSnapshot(byte[] snapshotName)
public void setSnapshotQuota(int snapshotQuota)
public org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot addSnapshot(int id,
String name)
throws SnapshotException,
org.apache.hadoop.hdfs.protocol.QuotaExceededException
Throws:
SnapshotException
org.apache.hadoop.hdfs.protocol.QuotaExceededException
public org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot removeSnapshot(String snapshotName,
INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes)
throws SnapshotException
Throws:
SnapshotException
public void renameSnapshot(String path,
String oldName,
String newName)
throws SnapshotException
Throws:
SnapshotException

public void addSnapshottableFeature()
public void removeSnapshottableFeature()
public void replaceChild(org.apache.hadoop.hdfs.server.namenode.INode oldChild,
org.apache.hadoop.hdfs.server.namenode.INode newChild,
INodeMap inodeMap)
public void recordModification(int latestSnapshotId)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
This inode is being modified.
Overrides:
recordModification in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
latestSnapshotId - The id of the latest snapshot that has been taken. Note that it is Snapshot.CURRENT_STATE_ID if no snapshots have been taken.
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException
public org.apache.hadoop.hdfs.server.namenode.INode saveChild2Snapshot(org.apache.hadoop.hdfs.server.namenode.INode child,
int latestSnapshotId,
org.apache.hadoop.hdfs.server.namenode.INode snapshotCopy)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException
public org.apache.hadoop.hdfs.server.namenode.INode getChild(byte[] name,
int snapshotId)
Parameters:
name - the name of the child
snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the corresponding snapshot; otherwise, get the result from the current directory.
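The lookup rule above (read the live children list when Snapshot.CURRENT_STATE_ID is passed, otherwise read the frozen state of the requested snapshot) can be illustrated with a minimal self-contained sketch. All names here (SnapshotDirModel, its fields, the sentinel value) are hypothetical stand-ins for illustration, not the real HDFS classes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of snapshot-aware child lookup; not the real HDFS code.
class SnapshotDirModel {
    static final int CURRENT_STATE_ID = Integer.MAX_VALUE; // stand-in for Snapshot.CURRENT_STATE_ID

    private final Map<String, String> current = new HashMap<>(); // name -> inode (modeled as a String)
    private final Map<Integer, Map<String, String>> snapshots = new HashMap<>(); // id -> frozen children

    void addChild(String name, String inode) { current.put(name, inode); }

    // Freeze the current children under the given snapshot id.
    void takeSnapshot(int id) { snapshots.put(id, new HashMap<>(current)); }

    // CURRENT_STATE_ID reads the live list; any other id reads the snapshot copy.
    String getChild(String name, int snapshotId) {
        if (snapshotId == CURRENT_STATE_ID) return current.get(name);
        Map<String, String> snap = snapshots.get(snapshotId);
        return snap == null ? null : snap.get(name);
    }
}
```

A child added after a snapshot was taken is visible in the current state but not when querying that snapshot's id.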
public int searchChild(org.apache.hadoop.hdfs.server.namenode.INode inode)
Returns:
Snapshot.CURRENT_STATE_ID if the inode is in the children list; Snapshot.NO_SNAPSHOT_ID if the inode is neither in the children list nor in any snapshot; otherwise the snapshot id of the corresponding snapshot diff list.

public org.apache.hadoop.hdfs.util.ReadOnlyList<org.apache.hadoop.hdfs.server.namenode.INode> getChildrenList(int snapshotId)
Parameters:
snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the corresponding snapshot; otherwise, get the result from the current directory.
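The three-way return contract of searchChild described above can be modeled with a small self-contained sketch; SearchChildModel and its sentinel values are hypothetical stand-ins for illustration, not the real HDFS types:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of the searchChild return contract; not the real HDFS code.
class SearchChildModel {
    static final int CURRENT_STATE_ID = Integer.MAX_VALUE; // stand-in for Snapshot.CURRENT_STATE_ID
    static final int NO_SNAPSHOT_ID = -1;                   // stand-in for Snapshot.NO_SNAPSHOT_ID

    final List<String> children = new ArrayList<>();            // live children list
    final Map<String, Integer> deletedBySnapshot = new HashMap<>(); // child -> snapshot diff id

    // CURRENT_STATE_ID if live; the owning snapshot diff id if only in a
    // deleted list; NO_SNAPSHOT_ID if nowhere to be found.
    int searchChild(String inode) {
        if (children.contains(inode)) return CURRENT_STATE_ID;
        Integer snap = deletedBySnapshot.get(inode);
        return snap != null ? snap : NO_SNAPSHOT_ID;
    }
}
```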
public boolean removeChild(org.apache.hadoop.hdfs.server.namenode.INode child,
int latestSnapshotId)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException

public boolean removeChild(org.apache.hadoop.hdfs.server.namenode.INode child)
Remove the specified child from this directory.
Parameters:
child - the child inode to be removed
public boolean addChild(org.apache.hadoop.hdfs.server.namenode.INode node,
boolean setModTime,
int latestSnapshotId)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Parameters:
node - INode to insert
setModTime - set modification time for the parent node, not needed when replaying the addition and the parent already has the proper mod time
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException

public boolean addChild(org.apache.hadoop.hdfs.server.namenode.INode node)
public Quota.Counts computeQuotaUsage(Quota.Counts counts,
boolean useCache,
int lastSnapshotId)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Count subtree Quota.NAMESPACE and Quota.DISKSPACE usages.

With the existence of INodeReference, the same inode and its subtree may be referred to by multiple INodeReference.WithName nodes and an INodeReference.DstReference node. To avoid cycles during quota usage computation, we have the following rules:

1. For an INodeReference.DstReference node, since the node must be in the current tree (or has been deleted as the end point of a series of rename operations), we compute the quota usage of the referred node (and its subtree) in the regular manner, i.e., including every inode in the current tree and in snapshot copies, as well as the size of the diff list.
2. For an INodeReference.WithName node, since the node must be in a snapshot, we only count the quota usage for those nodes that still existed at the creation time of the snapshot associated with the INodeReference.WithName node. We do not count in the size of the diff list.

Overrides:
computeQuotaUsage in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
counts - The subtree counts for returning.
useCache - Whether to use cached quota usage. Note that an INodeReference.WithName node never uses cache for its subtree.
lastSnapshotId - Snapshot.CURRENT_STATE_ID indicates the computation is in the current tree. Otherwise the id indicates the computation range for an INodeReference.WithName node.
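The two counting rules above differ only in which inodes are included: a DstReference counts the whole referred subtree, while a WithName counts only inodes that existed when its associated snapshot was taken. A minimal self-contained sketch (QuotaRulesModel and its fields are hypothetical stand-ins, not the real HDFS code):

```java
// Hypothetical model of the two quota-counting rules; not the real HDFS code.
class QuotaRulesModel {
    // Creation "time" of each inode in the subtree, modeled as a simple counter.
    final int[] creationTimes;

    QuotaRulesModel(int[] creationTimes) { this.creationTimes = creationTimes; }

    // Rule 1 (DstReference): count every inode in the referred subtree.
    int countForDstReference() { return creationTimes.length; }

    // Rule 2 (WithName): count only inodes that already existed when the
    // associated snapshot was taken.
    int countForWithName(int snapshotTime) {
        int n = 0;
        for (int t : creationTimes) {
            if (t <= snapshotTime) n++;
        }
        return n;
    }
}
```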
public Quota.Counts computeQuotaUsage4CurrentDirectory(Quota.Counts counts)
public org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext computeContentSummary(org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Count subtree content summary with a Content.Counts.
Overrides:
computeContentSummary in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
summary - the context object holding counts for the subtree.
public void undoRename4ScrParent(INodeReference oldChild,
org.apache.hadoop.hdfs.server.namenode.INode newChild)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
This method is usually called by the undo section of rename. It:
1) removes the WithName node from the deleted list (if it exists);
2) replaces the WithName node in the created list with srcChild;
3) adds srcChild back as a child of srcParent.
Note that since we already add the node into the created list of a snapshot diff in step 2, we do not need to add srcChild to the created list of the latest snapshot. We also do not need to update quota usage because the old child was in the deleted list before.
Parameters:
oldChild - The reference node to be removed/replaced
newChild - The node to be added back
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException - should not throw this exception
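The three undo steps described above can be sketched with plain lists standing in for the children list and the created/deleted lists of a snapshot diff. UndoRenameModel and its members are hypothetical stand-ins for illustration, not the real HDFS code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the three undo-rename steps; not the real HDFS code.
class UndoRenameModel {
    final List<String> children = new ArrayList<>();    // live children of srcParent
    final List<String> createdList = new ArrayList<>(); // created list of the snapshot diff
    final List<String> deletedList = new ArrayList<>(); // deleted list of the snapshot diff

    // Mirrors the documented steps: drop the reference node from the deleted
    // list, swap it for the original child in the created list, and re-attach
    // the child; no quota update is modeled, matching the note above.
    void undoRename(String oldChildRef, String newChild) {
        deletedList.remove(oldChildRef);          // step 1
        int i = createdList.indexOf(oldChildRef);
        if (i >= 0) createdList.set(i, newChild); // step 2
        children.add(newChild);                   // step 3
    }
}
```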
public void undoRename4DstParent(org.apache.hadoop.hdfs.server.namenode.INode deletedChild,
int latestSnapshotId)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException

public void clearChildren()
public void clear()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Clear references to other objects.
Overrides:
clear in class org.apache.hadoop.hdfs.server.namenode.INode
public Quota.Counts cleanSubtreeRecursively(int snapshot,
int prior,
INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes,
Map<org.apache.hadoop.hdfs.server.namenode.INode,org.apache.hadoop.hdfs.server.namenode.INode> excludedNodes,
boolean countDiffChange)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException
public void destroyAndCollectBlocks(INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion.
Overrides:
destroyAndCollectBlocks in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
collectedBlocks - blocks collected from the descendants for further block deletion/update will be added to this map.
removedINodes - INodes collected from the descendants for further cleaning up of inodeMap
public Quota.Counts cleanSubtree(int snapshotId,
int priorSnapshotId,
INode.BlocksMapUpdateInfo collectedBlocks,
List<org.apache.hadoop.hdfs.server.namenode.INode> removedINodes,
boolean countDiffChange)
throws org.apache.hadoop.hdfs.protocol.QuotaExceededException
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Clean the subtree under this inode and collect the blocks from the descendants for further block deletion/update.

In general, we have the following rules.
1. When deleting a file/directory in the current tree, we have different actions according to the type of the node to delete.
1.1 The current inode (this) is an INodeFile.
1.1.1 If prior is null, there is no snapshot taken on ancestors before. Thus we simply destroy (i.e., delete completely, no need to save a snapshot copy) the current INode and collect its blocks for further cleansing.
1.1.2 Else do nothing since the current INode will be stored as a snapshot copy.
1.2 The current inode is an INodeDirectory.
1.2.1 If prior is null, there is no snapshot taken on ancestors before. Similarly, we destroy the whole subtree and collect blocks.
1.2.2 Else do nothing with the current INode. Recursively clean its children.
1.3 The current inode is a file with snapshot. Call recordModification(..) to capture the current states. Mark the INode as deleted.
1.4 The current inode is an INodeDirectory with snapshot feature. Call recordModification(..) to capture the current states. Destroy files/directories created after the latest snapshot (i.e., the inodes stored in the created list of the latest snapshot). Recursively clean remaining children.
2. When deleting a snapshot.
2.1 To clean INodeFile: do nothing.
2.2 To clean INodeDirectory: recursively clean its children.
2.3 To clean INodeFile with snapshot: delete the corresponding snapshot in its diff list.
2.4 To clean INodeDirectory with snapshot: delete the corresponding snapshot in its diff list. Recursively clean its children.

Overrides:
cleanSubtree in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
snapshotId - The id of the snapshot to delete. Snapshot.CURRENT_STATE_ID means to delete the current file/directory.
priorSnapshotId - The id of the latest snapshot before the to-be-deleted snapshot. When deleting a current inode, this parameter captures the latest snapshot.
collectedBlocks - blocks collected from the descendants for further block deletion/update will be added to the given map.
removedINodes - INodes collected from the descendants for further cleaning up of inodeMap
Throws:
org.apache.hadoop.hdfs.protocol.QuotaExceededException

public boolean metadataEquals(org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes other)
Compare the metadata with another INodeDirectory.
Specified by:
metadataEquals in interface org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes
public void dumpTreeRecursively(PrintWriter out,
StringBuilder prefix,
int snapshot)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Dump tree recursively.
Overrides:
dumpTreeRecursively in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
prefix - The prefix string that each line should print.
public static void dumpTreeRecursively(PrintWriter out,
StringBuilder prefix,
Iterable<INodeDirectory.SnapshotAndINode> subs)
Parameters:
prefix - The prefix string that each line should print.
subs - The subtrees.

public final int getChildrenNum(int snapshotId)