Because both Hadoop's MapReduce and HDFS need to communicate over the network, the objects they exchange must be serialized. Hadoop does not use Java's built-in serialization; instead it introduces its own system, a simple and efficient serialization protocol built on DataOutput and DataInput from java.io.
A large number of serializable classes are defined in org.apache.hadoop.io, and they all implement the Writable interface.
```java
package org.apache.hadoop.io;

import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A serializable object which implements a simple, efficient, serialization
 * protocol, based on {@link DataInput} and {@link DataOutput}.
 *
 * <p>Any <code>key</code> or <code>value</code> type in the Hadoop Map-Reduce
 * framework implements this interface.</p>
 *
 * <p>Implementations typically implement a static <code>read(DataInput)</code>
 * method which constructs a new instance, calls {@link #readFields(DataInput)}
 * and returns the instance.</p>
 *
 * <p>Example:</p>
 * <p><blockquote><pre>
 *     public class MyWritable implements Writable {
 *       // Some data
 *       private int counter;
 *       private long timestamp;
 *
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *
 *       public static MyWritable read(DataInput in) throws IOException {
 *         MyWritable w = new MyWritable();
 *         w.readFields(in);
 *         return w;
 *       }
 *     }
 * </pre></blockquote></p>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Writable {
  /**
   * Serialize the fields of this object to <code>out</code>.
   *
   * @param out <code>DataOutput</code> to serialize this object into.
   * @throws IOException
   */
  void write(DataOutput out) throws IOException;

  /**
   * Deserialize the fields of this object from <code>in</code>.
   *
   * <p>For efficiency, implementations should attempt to re-use storage in the
   * existing object where possible.</p>
   *
   * @param in <code>DataInput</code> to deserialize this object from.
   * @throws IOException
   */
  void readFields(DataInput in) throws IOException;
}
```
The source opens with an introduction to the Writable interface, including a typical example of a class that implements it.
The interface itself carries two annotations, whose purpose is to mark classes and methods; they are introduced separately below.
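The MyWritable example from the Javadoc can be exercised with plain java.io streams. Below is a minimal sketch: the class and its fields follow the Javadoc example (with the Hadoop dependency dropped so it compiles stand-alone), while the byte-stream round trip and the `roundTrip` helper are our own illustration.

```java
import java.io.*;

// Stand-alone copy of the Javadoc's MyWritable, minus the Writable
// interface itself, so the round trip works without Hadoop on the classpath.
class MyWritable {
    int counter;
    long timestamp;

    public void write(DataOutput out) throws IOException {
        out.writeInt(counter);
        out.writeLong(timestamp);
    }

    public void readFields(DataInput in) throws IOException {
        counter = in.readInt();
        timestamp = in.readLong();
    }

    public static MyWritable read(DataInput in) throws IOException {
        MyWritable w = new MyWritable();
        w.readFields(in);
        return w;
    }
}

public class WritableRoundTrip {
    // Serialize w into an in-memory byte stream, then deserialize a copy back.
    static MyWritable roundTrip(MyWritable w) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            w.write(new DataOutputStream(bytes));               // serialize
            DataInput in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
            return MyWritable.read(in);                         // deserialize
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        MyWritable w = new MyWritable();
        w.counter = 42;
        w.timestamp = 1445000000000L;
        MyWritable copy = roundTrip(w);
        System.out.println(copy.counter + " " + copy.timestamp);  // 42 1445000000000
    }
}
```

Note that the serialized form is just the raw 4-byte int followed by the 8-byte long, with no class metadata at all, which is exactly why this protocol is more compact than Java serialization.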
InterfaceAudience: mainly used to indicate the intended scope of use.
The InterfaceAudience class contains three annotation types, describing the potential audience of the types they annotate:
@InterfaceAudience.Public: available to all projects and applications
@InterfaceAudience.LimitedPrivate: limited to certain specified projects, such as Common and HDFS
@InterfaceAudience.Private: limited to use within Hadoop itself
```java
package org.apache.hadoop.classification;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/**
 * Annotation to inform users of a package, class or method's intended audience.
 * Currently the audience can be {@link Public}, {@link LimitedPrivate} or
 * {@link Private}. <br>
 * All public classes must have InterfaceAudience annotation. <br>
 * <ul>
 * <li>Public classes that are not marked with this annotation must be
 * considered by default as {@link Private}.</li>
 *
 * <li>External applications must only use classes that are marked
 * {@link Public}. Avoid using non public classes as these classes
 * could be removed or change in incompatible ways.</li>
 *
 * <li>Hadoop projects must only use classes that are marked
 * {@link LimitedPrivate} or {@link Public}</li>
 *
 * <li> Methods may have a different annotation that it is more restrictive
 * compared to the audience classification of the class. Example: A class
 * might be {@link Public}, but a method may be {@link LimitedPrivate}
 * </li></ul>
 */
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class InterfaceAudience {
  /**
   * Intended for use by any project or application.
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Public {};

  /**
   * Intended only for the project(s) specified in the annotation.
   * For example, "Common", "HDFS", "MapReduce", "ZooKeeper", "HBase".
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface LimitedPrivate {
    String[] value();
  };

  /**
   * Intended for use only within Hadoop itself.
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Private {};

  private InterfaceAudience() {} // Audience can't exist on its own
}
```
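Because all three audience annotations are declared with @Retention(RetentionPolicy.RUNTIME), their presence can be checked reflectively, which is how an API-surface scanning tool could use them. A minimal sketch; the nested annotations here are stand-ins we define ourselves, mirroring the real ones in org.apache.hadoop.classification:

```java
import java.lang.annotation.*;

public class AudienceCheck {
    // Stand-ins for InterfaceAudience.Public / LimitedPrivate; retained at
    // runtime just like the originals so reflection can see them.
    @Documented @Retention(RetentionPolicy.RUNTIME)
    @interface Public {}

    @Documented @Retention(RetentionPolicy.RUNTIME)
    @interface LimitedPrivate { String[] value(); }

    @Public
    static class ForEveryone {}

    @LimitedPrivate({"HDFS", "MapReduce"})
    static class ForSomeProjects {}

    // Determine a class's audience by inspecting its runtime annotations.
    static String audienceOf(Class<?> c) {
        if (c.isAnnotationPresent(Public.class)) {
            return "Public";
        }
        LimitedPrivate lp = c.getAnnotation(LimitedPrivate.class);
        if (lp != null) {
            return "LimitedPrivate" + java.util.Arrays.toString(lp.value());
        }
        // Per the Javadoc above, unannotated public classes count as Private.
        return "Private (default)";
    }

    public static void main(String[] args) {
        System.out.println(audienceOf(ForEveryone.class));      // Public
        System.out.println(audienceOf(ForSomeProjects.class));  // LimitedPrivate[HDFS, MapReduce]
    }
}
```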
InterfaceStability likewise contains three annotations, used to indicate the stability of the types they annotate, i.e., how much users may rely on them not changing over time:
@InterfaceStability.Stable: stable within a major version; compatibility may only be broken between major versions
@InterfaceStability.Evolving: still evolving; compatibility may be broken between minor versions
@InterfaceStability.Unstable: no guarantee of reliability or stability at any level of release granularity
```java
package org.apache.hadoop.classification;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate;
import org.apache.hadoop.classification.InterfaceAudience.Private;
import org.apache.hadoop.classification.InterfaceAudience.Public;

/**
 * Annotation to inform users of how much to rely on a particular package,
 * class or method not changing over time. Currently the stability can be
 * {@link Stable}, {@link Evolving} or {@link Unstable}. <br>
 *
 * <ul><li>All classes that are annotated with {@link Public} or
 * {@link LimitedPrivate} must have InterfaceStability annotation. </li>
 * <li>Classes that are {@link Private} are to be considered unstable unless
 * a different InterfaceStability annotation states otherwise.</li>
 * <li>Incompatible changes must not be made to classes marked as stable.</li>
 * </ul>
 */
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class InterfaceStability {
  /**
   * Can evolve while retaining compatibility for minor release boundaries.;
   * can break compatibility only at major release (ie. at m.0).
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Stable {};

  /**
   * Evolving, but can break compatibility at minor release (i.e. m.x)
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Evolving {};

  /**
   * No guarantee is provided as to reliability or stability across any
   * level of release granularity.
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Unstable {};
}
```
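The Javadoc above states a rule: every class annotated Public or LimitedPrivate must also carry a stability annotation. Since both annotation families are runtime-retained, that rule is mechanically checkable. A sketch, again with stand-ins we define ourselves in place of the real Hadoop annotations:

```java
import java.lang.annotation.*;

public class StabilityRule {
    // Stand-ins mirroring InterfaceAudience.Public and the three
    // InterfaceStability annotations; runtime-retained like the originals.
    @Documented @Retention(RetentionPolicy.RUNTIME) @interface Public {}
    @Documented @Retention(RetentionPolicy.RUNTIME) @interface Stable {}
    @Documented @Retention(RetentionPolicy.RUNTIME) @interface Evolving {}
    @Documented @Retention(RetentionPolicy.RUNTIME) @interface Unstable {}

    @Public @Stable
    static class GoodApi {}

    @Public  // missing a stability annotation: violates the documented rule
    static class BadApi {}

    // True when a @Public class also declares one of the stability annotations.
    static boolean followsRule(Class<?> c) {
        if (!c.isAnnotationPresent(Public.class)) {
            return true;  // the rule only binds Public/LimitedPrivate classes
        }
        return c.isAnnotationPresent(Stable.class)
            || c.isAnnotationPresent(Evolving.class)
            || c.isAnnotationPresent(Unstable.class);
    }

    public static void main(String[] args) {
        System.out.println(followsRule(GoodApi.class));  // true
        System.out.println(followsRule(BadApi.class));   // false
    }
}
```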
The write method serializes the object's fields;
the readFields method deserializes them, and per the Javadoc it should re-use the existing object's storage where possible.
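The re-use advice in the readFields Javadoc matters in practice: MapReduce typically deserializes many records into the same object, so avoiding a fresh allocation per record reduces garbage-collection pressure. Below is a sketch of a variable-length, bytes-holding writable that only grows its buffer when the incoming record is larger than the current one; the class and method names are ours, not Hadoop's (Hadoop's own Text and BytesWritable use the same trick).

```java
import java.io.*;

// A minimal bytes-holding writable illustrating storage re-use in
// readFields: the backing array is reallocated only when too small.
public class BytesHolder {
    private byte[] buf = new byte[0];
    private int length;  // number of valid bytes in buf

    public void set(byte[] data) {
        ensureCapacity(data.length);
        System.arraycopy(data, 0, buf, 0, data.length);
        length = data.length;
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(length);          // length prefix, then the raw bytes
        out.write(buf, 0, length);
    }

    public void readFields(DataInput in) throws IOException {
        int newLength = in.readInt();
        ensureCapacity(newLength);     // re-use buf when it is big enough
        in.readFully(buf, 0, newLength);
        length = newLength;
    }

    private void ensureCapacity(int needed) {
        if (buf.length < needed) {
            buf = new byte[needed];
        }
    }

    public int getLength() { return length; }

    public byte[] copyBytes() {
        return java.util.Arrays.copyOf(buf, length);
    }

    // Convenience round trip through an in-memory stream, for the demo below.
    static BytesHolder roundTrip(BytesHolder h) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            h.write(new DataOutputStream(bos));
            BytesHolder out = new BytesHolder();
            out.readFields(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
            return out;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        BytesHolder h = new BytesHolder();
        h.set(new byte[]{1, 2, 3});
        System.out.println(roundTrip(h).getLength());  // 3
    }
}
```

If the same BytesHolder instance is fed a stream of records via repeated readFields calls, it allocates a new array only when a record exceeds every previous record's size.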
Copyright notice: this is the blogger's original article; do not repost without the blogger's permission.
Original: http://blog.csdn.net/hadoop_/article/details/49174585